DL2 - Jupyter Notebook

The document outlines the implementation of a feedforward neural network using Keras and TensorFlow, specifically for the MNIST dataset. It details the steps of importing necessary packages, loading data, defining the network architecture, training the model with stochastic gradient descent, and evaluating its performance. The results include training loss and accuracy plots, along with a classification report showing the model's performance metrics.

Implementing Feedforward Neural Networks with Keras and TensorFlow
a. Import the necessary packages
b. Load the training and testing data (MNIST/CIFAR10)
c. Define the network architecture using Keras
d. Train the model using SGD
e. Evaluate the network
f. Plot the training loss and accuracy

In [2]: # import the required packages
from sklearn.preprocessing import LabelBinarizer
from sklearn.metrics import classification_report
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.datasets import mnist
from tensorflow.keras import backend as K
import matplotlib.pyplot as plt
import numpy as np
import warnings
warnings.filterwarnings("ignore")

In [3]: # grab the MNIST dataset, flatten each 28x28 image into a 784-dim vector,
# and scale the pixel intensities into the range [0, 1]
((X_train, Y_train), (X_test, Y_test)) = mnist.load_data()
X_train = X_train.reshape((X_train.shape[0], 28 * 28 * 1))
X_test = X_test.reshape((X_test.shape[0], 28 * 28 * 1))
X_train = X_train.astype("float32") / 255.0
X_test = X_test.astype("float32") / 255.0
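A quick shape check (not part of the original notebook) confirms what this preprocessing produces: each 28x28 image becomes a 784-dimensional row vector and the pixel values land in [0, 1].

# Hypothetical sanity check, not in the original notebook
print(X_train.shape, X_test.shape)   # expected: (60000, 784) (10000, 784)
print(X_train.min(), X_train.max())  # expected: 0.0 1.0 after dividing by 255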

In [4]: # one-hot encode the integer labels (0-9) into 10-dimensional vectors
lb = LabelBinarizer()
Y_train = lb.fit_transform(Y_train)
Y_test = lb.transform(Y_test)
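LabelBinarizer turns each integer label into a 10-dimensional one-hot vector, which is the target format categorical_crossentropy expects. A roughly equivalent sketch using Keras' own utility (an assumption, not from the notebook):

# Alternative one-hot encoding sketch, not in the original notebook
# from tensorflow.keras.utils import to_categorical
# Y_train = to_categorical(Y_train, num_classes=10)  # e.g. label 3 -> [0,0,0,1,0,0,0,0,0,0]
# Y_test = to_categorical(Y_test, num_classes=10)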

In [9]: # define the 784-128-64-10 architecture: two sigmoid hidden layers
# followed by a 10-way softmax output layer
model = Sequential()
model.add(Dense(128, input_shape=(784,), activation="sigmoid"))
model.add(Dense(64, activation="sigmoid"))
model.add(Dense(10, activation="softmax"))
model.summary()

Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_6 (Dense) (None, 128) 100480

dense_7 (Dense) (None, 64) 8256

dense_8 (Dense) (None, 10) 650

=================================================================
Total params: 109386 (427.29 KB)
Trainable params: 109386 (427.29 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
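The parameter counts in the summary are simply weights plus biases for each Dense layer; a small check of that arithmetic (illustrative, not in the original notebook):

# Dense layer parameters = inputs * units + units (bias), illustrative check
print(784 * 128 + 128)       # 100480 (dense_6)
print(128 * 64 + 64)         # 8256   (dense_7)
print(64 * 10 + 10)          # 650    (dense_8)
print(100480 + 8256 + 650)   # 109386 total, matching the summary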


In [10]: # train the model using SGD with a learning rate of 0.02 for 20 epochs
sgd = SGD(learning_rate=0.02)
epochs = 20
model.compile(loss="categorical_crossentropy", optimizer=sgd, metrics=["accuracy"])
H = model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=epochs, batch_size=128)
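With 60,000 training images and a batch size of 128, each epoch runs ceil(60000 / 128) = 469 gradient updates, which matches the 469/469 progress bars in the log below (a check added here for illustration, not in the original notebook):

# Steps-per-epoch check, illustrative only
import math
print(math.ceil(60000 / 128))  # 469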


Epoch 1/20
469/469 [==============================] - 2s 3ms/step - loss: 2.2552 - accuracy: 0.2486 - val_loss: 2.1963 - val_accuracy: 0.4051
Epoch 2/20
469/469 [==============================] - 1s 2ms/step - loss: 2.1205 - accuracy: 0.4956 - val_loss: 2.0132 - val_accuracy: 0.5698
Epoch 3/20
469/469 [==============================] - 1s 2ms/step - loss: 1.8644 - accuracy: 0.5861 - val_loss: 1.6742 - val_accuracy: 0.6286
Epoch 4/20
469/469 [==============================] - 1s 2ms/step - loss: 1.4991 - accuracy: 0.6398 - val_loss: 1.3131 - val_accuracy: 0.6951
Epoch 5/20
469/469 [==============================] - 1s 2ms/step - loss: 1.1908 - accuracy: 0.7068 - val_loss: 1.0600 - val_accuracy: 0.7372
Epoch 6/20
469/469 [==============================] - 1s 2ms/step - loss: 0.9822 - accuracy: 0.7592 - val_loss: 0.8896 - val_accuracy: 0.7862
Epoch 7/20
469/469 [==============================] - 1s 2ms/step - loss: 0.8384 - accuracy: 0.7952 - val_loss: 0.7692 - val_accuracy: 0.8129
Epoch 8/20
469/469 [==============================] - 1s 2ms/step - loss: 0.7343 - accuracy: 0.8203 - val_loss: 0.6796 - val_accuracy: 0.8298
Epoch 9/20
469/469 [==============================] - 1s 2ms/step - loss: 0.6570 - accuracy: 0.8366 - val_loss: 0.6134 - val_accuracy: 0.8437
Epoch 10/20
469/469 [==============================] - 1s 3ms/step - loss: 0.5985 - accuracy: 0.8493 - val_loss: 0.5619 - val_accuracy: 0.8542
Epoch 11/20
469/469 [==============================] - 1s 2ms/step - loss: 0.5533 - accuracy: 0.8577 - val_loss: 0.5213 - val_accuracy: 0.8630
Epoch 12/20
469/469 [==============================] - 1s 2ms/step - loss: 0.5174 - accuracy: 0.8657 - val_loss: 0.4897 - val_accuracy: 0.8719
Epoch 13/20
469/469 [==============================] - 1s 3ms/step - loss: 0.4881 - accuracy: 0.8717 - val_loss: 0.4630 - val_accuracy: 0.8773
Epoch 14/20
469/469 [==============================] - 1s 2ms/step - loss: 0.4642 - accuracy: 0.8769 - val_loss: 0.4419 - val_accuracy: 0.8803
Epoch 15/20
469/469 [==============================] - 1s 2ms/step - loss: 0.4442 - accuracy: 0.8818 - val_loss: 0.4233 - val_accuracy: 0.8865
Epoch 16/20
469/469 [==============================] - 1s 3ms/step - loss: 0.4274 - accuracy: 0.8856 - val_loss: 0.4074 - val_accuracy: 0.8891
Epoch 17/20
469/469 [==============================] - 1s 2ms/step - loss: 0.4131 - accuracy: 0.8887 - val_loss: 0.3945 - val_accuracy: 0.8930
Epoch 18/20
469/469 [==============================] - 1s 2ms/step - loss: 0.4006 - accuracy: 0.8914 - val_loss: 0.3838 - val_accuracy: 0.8957
Epoch 19/20
469/469 [==============================] - 1s 3ms/step - loss: 0.3900 - accuracy: 0.8932 - val_loss: 0.3738 - val_accuracy: 0.8967
Epoch 20/20
469/469 [==============================] - 1s 2ms/step - loss: 0.3805 - accuracy: 0.8961 - val_loss: 0.3646 - val_accuracy: 0.8982


In [11]: # make predictions on the test set and print a per-class report
predictions = model.predict(X_test, batch_size=128)
print(classification_report(Y_test.argmax(axis=1), predictions.argmax(axis=1)))

79/79 [==============================] - 0s 2ms/step

              precision    recall  f1-score   support

           0       0.92      0.97      0.95       980
           1       0.96      0.97      0.96      1135
           2       0.89      0.88      0.88      1032
           3       0.89      0.87      0.88      1010
           4       0.88      0.92      0.90       982
           5       0.86      0.81      0.84       892
           6       0.90      0.93      0.92       958
           7       0.91      0.90      0.91      1028
           8       0.86      0.85      0.86       974
           9       0.89      0.86      0.87      1009

    accuracy                           0.90     10000
   macro avg       0.90      0.90      0.90     10000
weighted avg       0.90      0.90      0.90     10000
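The per-class report can be complemented with a confusion matrix, which shows exactly which digits are mistaken for which (for example, how the weaker classes 5 and 8 are misread). A minimal sketch assuming the same predictions array; this cell is not in the original notebook:

# Confusion matrix sketch, not in the original notebook
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(Y_test.argmax(axis=1), predictions.argmax(axis=1))
print(cm)  # rows = true digit, columns = predicted digit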

In [12]: #plotting the training loss and accuracy


plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(0, epochs), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, epochs), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, epochs), H.history["accuracy"], label="train_acc")
plt.plot(np.arange(0, epochs), H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend()

Out[12]: <matplotlib.legend.Legend at 0x256b7f48850>

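Inside Jupyter the figure renders inline; if the same code were run as a plain script, a final plt.show() or plt.savefig(...) call would be needed, and model.evaluate gives the overall test loss and accuracy in one call. A minimal sketch assuming the model and data defined above; not part of the original notebook:

# Optional follow-up, not in the original notebook
plt.savefig("dl2_training_curves.png")  # persist the figure outside a notebook (filename is hypothetical)
test_loss, test_acc = model.evaluate(X_test, Y_test, verbose=0)
print(test_loss, test_acc)  # should be close to the epoch-20 val_loss / val_accuracy above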
