Deep Learning
Procedure:
MNIST Handwritten Digit Classification Dataset:
The MNIST dataset is a popular benchmark dataset for image classification tasks. It
consists of 60,000 grayscale images of handwritten digits (0 to 9) for training and 10,000
images for testing. Each image is 28 x 28 pixels in size, and each pixel value ranges from 0
to 255. The goal of the task is to correctly classify each image into one of the 10 possible
digit classes.
In this implementation, we first load the MNIST dataset using the mnist.load_data()
function from Keras.
# Load MNIST dataset
from tensorflow.keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
In this step, we use the mnist.load_data() function from Keras to load the MNIST dataset.
The training data consists of the X_train images and their corresponding y_train labels,
while the test data consists of the X_test images and their corresponding y_test labels.
# Reshape input data
X_train = X_train.reshape(X_train.shape[0], 28*28)
X_test = X_test.reshape(X_test.shape[0], 28*28)
In this step, we preprocess the data by flattening each 28 x 28 image into a 1D array of 784 values.
# Normalize input data
X_train = X_train / 255
X_test = X_test / 255
We then normalize the data by dividing the pixel values by 255, scaling them to the range 0 to 1.
# One-hot encode target variables
from tensorflow.keras.utils import to_categorical
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
We then convert the labels to one-hot encoding using the to_categorical() function from Keras.
Next, we define the neural network model with three fully connected (dense) layers. The first
two hidden layers have 256 and 128 units, respectively, and use ReLU activation functions.
The dropout layers randomly drop out 20% of the input units during training to prevent
overfitting. The output layer has 10 units with softmax activation for multi-class
classification.
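The model definition itself is not reproduced in the manual; a minimal sketch of the architecture just described (assuming the Sequential API, with the 784-dimensional flattened input) might look like this:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

# Sketch: two hidden layers (256 and 128 units, ReLU), 20% dropout, 10-way softmax output
model = Sequential()
model.add(Dense(256, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))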
# Compile model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# Train model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=128)
We compile the model with the Adam optimizer, categorical cross-entropy loss (matching the
one-hot encoded labels), and accuracy metric. We train the model on the training data for
10 epochs with a batch size of 128. Finally, we evaluate the model on the test data and
print the accuracy score.
For the IMDB binary sentiment classification task, whose training log is shown in the output
below, we define the architecture using the Sequential() class from Keras: the first layer
has 128 units with ReLU activation, the second layer has 64 units with ReLU activation, and
the final layer has a single unit with sigmoid activation for binary classification.
Finally, we can evaluate the performance of the model on the test data using
the evaluate() function from Keras.
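The evaluate call itself is not reproduced in the manual; a minimal sketch of this step might be:
loss, accuracy = model.evaluate(X_test, y_test)
print('Accuracy: %.2f%%' % (accuracy * 100))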
Output:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-
datasets/imdb.npz
17464789/17464789 [==============================] - 1s 0us/step
Epoch 1/10
196/196 [==============================] - 2s 7ms/step - loss: 257.8831 - accuracy:
0.4990 - val_loss: 3.1933 - val_accuracy: 0.5036
Epoch 2/10
196/196 [==============================] - 1s 6ms/step - loss: 18.6795 - accuracy:
0.5094 - val_loss: 0.7013 - val_accuracy: 0.5010
Epoch 3/10
196/196 [==============================] - 1s 6ms/step - loss: 4.7280 - accuracy:
0.4982 - val_loss: 0.6967 - val_accuracy: 0.4960
Epoch 4/10
196/196 [==============================] - 1s 6ms/step - loss: 2.4193 - accuracy:
0.5012 - val_loss: 0.6952 - val_accuracy: 0.4974
Epoch 5/10
196/196 [==============================] - 1s 5ms/step - loss: 1.5178 - accuracy:
0.5043 - val_loss: 0.6936 - val_accuracy: 0.4989
Epoch 6/10
196/196 [==============================] - 1s 6ms/step - loss: 1.2867 - accuracy:
0.5026 - val_loss: 0.6937 - val_accuracy: 0.5003
Epoch 7/10
196/196 [==============================] - 1s 6ms/step - loss: 1.1111 - accuracy:
0.5014 - val_loss: 0.6932 - val_accuracy: 0.5002
Epoch 8/10
196/196 [==============================] - 2s 8ms/step - loss: 1.0110 - accuracy:
0.4982 - val_loss: 0.6932 - val_accuracy: 0.5002
Epoch 9/10
196/196 [==============================] - 1s 7ms/step - loss: 0.8932 - accuracy:
0.4972 - val_loss: 0.6932 - val_accuracy: 0.5004
Epoch 10/10
196/196 [==============================] - 1s 6ms/step - loss: 0.8888 - accuracy:
0.4971 - val_loss: 0.6931 - val_accuracy: 0.5002
Accuracy: 50.02%
EXPERIMENT NO – 3
Aim: Design a neural network for classifying newswires (multi-class classification) using
the Reuters dataset.
Procedure:
Reuters Dataset:
The Reuters dataset is a collection of newswire articles and their categories. It consists of
11,228 newswire articles that are classified into 46 different topics or categories. The goal of
this task is to train a neural network to accurately classify newswire articles into their
respective categories.
Input layer: This layer will take in the vectorized representation of the news articles in the
Reuters dataset.
Hidden layers: You can use one or more hidden layers with a varying number of neurons in
each layer. You can experiment with the number of layers and neurons to find the optimal
configuration for your specific problem.
Output layer: This layer will output a probability distribution over the possible categories for
each input news article. Since this is a multi-class classification problem, you can use a
softmax activation function in the output layer to ensure that the predicted probabilities
sum to 1.
Program:
import numpy as np
from tensorflow.keras.datasets import reuters
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.utils import to_categorical
We import all the necessary libraries for the model, and we will use the Keras library to
load the dataset and preprocess it.
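The data-loading step and the vectorize_sequences() helper used below are not reproduced in the manual; a sketch, assuming a 10,000-word vocabulary, might look like this:
num_words = 10000  # assumed vocabulary size

# Load the Reuters newswires, keeping only the most frequent words
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=num_words)

def vectorize_sequences(sequences, dimension=num_words):
    # Multi-hot encode each newswire: 1 at every word index that occurs in it
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.0
    return results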
x_train = vectorize_sequences(x_train)
x_test = vectorize_sequences(x_test)
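The remaining steps (one-hot encoding the 46 topic labels and defining the network described above) are also not shown; a minimal sketch, with assumed hidden-layer sizes and training hyperparameters, might be:
# One-hot encode the 46 topic labels
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

# Fully connected network with a 46-way softmax output; hidden-layer sizes are assumptions
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(num_words,)))
model.add(Dense(64, activation='relu'))
model.add(Dense(46, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, validation_split=0.1, epochs=10, batch_size=128)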
EXPERIMENT NO – 4
Aim: Design a neural network for predicting house prices using the Boston Housing Price dataset.
Procedure:
The Boston Housing Price dataset is a collection of 506 samples of housing prices in the
Boston area, where each sample has 13 features such as crime rate, average number of
rooms per dwelling, and others. The goal of this task is to train a neural network to
accurately predict the median value of owner-occupied homes in $1000's.
Input layer: This layer will take in the 13 features of each house.
Hidden layers: You can use one or more hidden layers with a varying number of neurons in
each layer. You can experiment with the number of layers and neurons to find the optimal
configuration for your specific problem.
Output layer: This layer will output a single numerical value, which is the predicted price of
the house.
Program:
from tensorflow.keras.datasets import boston_housing
We will import all the necessary libraries for the model, and we will use the Keras library to load the dataset and
preprocess it.
We will also split the dataset into training and validation sets.
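The load call is not shown in the listing; the Keras helper already returns separate training and test splits:
# Load the Boston Housing data, pre-split into training and test sets
(x_train, y_train), (x_test, y_test) = boston_housing.load_data()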
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(13,)))  # 13 input features
model.add(Dense(1))  # single output: the predicted price
The next step is to design the neural network architecture. For this task, we will use a fully connected neural network
with an input layer, one or more hidden layers, and an output layer. We will use the Dense class in Keras to add the
layers to our model. Since this is a regression problem, the output layer will have only one neuron, and we will not use
any activation function on it.
model.compile(optimizer='adam', loss='mse')
Once we have defined the model architecture, the next step is to compile the model. We need to specify the loss
function and optimizer for the model. Since this is a regression problem, we will use the mean squared error (mse) loss
function together with the adam optimizer; mean absolute error could additionally be tracked as an evaluation metric.
We then train the model on the training set.
model.fit(x_train, y_train,
          epochs=100,
          batch_size=32,
          validation_data=(x_test, y_test))
After compiling the model, the next step is to train it on the training data. We will use the fit method in Keras to train
the model. We will also specify the validation data and the batch size.
Once the model is trained, the next step is to evaluate its performance on the test data. We will use the evaluate
method in Keras to evaluate the model.
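A minimal sketch of this evaluation step, matching the test-loss printout in the output below, might be:
test_loss = model.evaluate(x_test, y_test)
print('Test loss:', test_loss)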
Output:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-
datasets/boston_housing.npz
57026/57026 [==============================] - 0s 0us/step
Epoch 1/100
13/13 [==============================] - 3s 71ms/step - loss: 581.6057 - val_loss:
600.8628
Epoch 2/100
13/13 [==============================] - 0s 10ms/step - loss: 568.9142 - val_loss:
578.0995
Epoch 3/100
13/13 [==============================] - 0s 11ms/step - loss: 547.0474 - val_loss:
539.9392
Epoch 4/100
13/13 [==============================] - 0s 9ms/step - loss: 510.9485 - val_loss:
479.1323
Epoch 5/100
13/13 [==============================] - 0s 6ms/step - loss: 455.0125 - val_loss:
393.1474
Epoch 6/100
13/13 [==============================] - 0s 7ms/step - loss: 379.5060 - val_loss:
288.2769
Epoch 7/100
13/13 [==============================] - 0s 6ms/step - loss: 289.0050 - val_loss:
187.7191
Epoch 8/100
13/13 [==============================] - 0s 6ms/step - loss: 198.9047 - val_loss:
123.9099
Epoch 9/100
13/13 [==============================] - 0s 5ms/step - loss: 133.4827 - val_loss:
118.9333
Epoch 10/100
13/13 [==============================] - 0s 5ms/step - loss: 103.5629 - val_loss:
148.6580
Epoch 11/100
13/13 [==============================] - 0s 5ms/step - loss: 96.0384 - val_loss:
157.2076
Epoch 12/100
13/13 [==============================] - 0s 6ms/step - loss: 94.3371 - val_loss:
151.7262
Epoch 13/100
13/13 [==============================] - 0s 6ms/step - loss: 91.8126 - val_loss:
139.2459
Epoch 14/100
13/13 [==============================] - 0s 5ms/step - loss: 89.3874 - val_loss:
128.9134
Epoch 15/100
13/13 [==============================] - 0s 7ms/step - loss: 87.6433 - val_loss:
122.3881
Epoch 16/100
13/13 [==============================] - 0s 7ms/step - loss: 85.5203 - val_loss:
118.5746
Epoch 17/100
13/13 [==============================] - 0s 6ms/step - loss: 83.6930 - val_loss:
117.3677
Epoch 18/100
13/13 [==============================] - 0s 6ms/step - loss: 81.8004 - val_loss:
107.7317
Epoch 19/100
13/13 [==============================] - 0s 6ms/step - loss: 80.0276 - val_loss:
108.8664
Epoch 20/100
13/13 [==============================] - 0s 6ms/step - loss: 78.1314 - val_loss:
101.8108
Epoch 21/100
13/13 [==============================] - 0s 5ms/step - loss: 76.4282 - val_loss:
97.4463
Epoch 22/100
13/13 [==============================] - 0s 7ms/step - loss: 74.9360 - val_loss:
96.3559
Epoch 23/100
13/13 [==============================] - 0s 6ms/step - loss: 73.3877 - val_loss:
87.7806
Epoch 24/100
13/13 [==============================] - 0s 7ms/step - loss: 71.7510 - val_loss:
89.5797
Epoch 25/100
13/13 [==============================] - 0s 8ms/step - loss: 70.0859 - val_loss:
84.9000
Epoch 26/100
13/13 [==============================] - 0s 9ms/step - loss: 68.7180 - val_loss:
81.1061
Epoch 27/100
13/13 [==============================] - 0s 8ms/step - loss: 67.2199 - val_loss:
80.6916
Epoch 28/100
13/13 [==============================] - 0s 6ms/step - loss: 65.8538 - val_loss:
78.3895
Epoch 29/100
13/13 [==============================] - 0s 6ms/step - loss: 64.5018 - val_loss:
75.4445
Epoch 30/100
13/13 [==============================] - 0s 9ms/step - loss: 63.2297 - val_loss:
75.0658
Epoch 31/100
13/13 [==============================] - 0s 6ms/step - loss: 62.0135 - val_loss:
72.5331
Epoch 32/100
13/13 [==============================] - 0s 6ms/step - loss: 61.0150 - val_loss:
72.5535
Epoch 33/100
13/13 [==============================] - 0s 6ms/step - loss: 59.7378 - val_loss:
70.0550
Epoch 34/100
13/13 [==============================] - 0s 6ms/step - loss: 58.8055 - val_loss:
71.9888
Epoch 35/100
13/13 [==============================] - 0s 6ms/step - loss: 57.8878 - val_loss:
70.7422
Epoch 36/100
13/13 [==============================] - 0s 6ms/step - loss: 57.4425 - val_loss:
68.0706
Epoch 37/100
13/13 [==============================] - 0s 6ms/step - loss: 56.2824 - val_loss:
73.5046
Epoch 38/100
13/13 [==============================] - 0s 6ms/step - loss: 55.7446 - val_loss:
72.7915
Epoch 39/100
13/13 [==============================] - 0s 6ms/step - loss: 55.0651 - val_loss:
71.6527
Epoch 40/100
13/13 [==============================] - 0s 7ms/step - loss: 54.5890 - val_loss:
71.9477
Epoch 41/100
13/13 [==============================] - 0s 5ms/step - loss: 54.0848 - val_loss:
74.4059
Epoch 42/100
13/13 [==============================] - 0s 6ms/step - loss: 53.6057 - val_loss:
72.6524
Epoch 43/100
13/13 [==============================] - 0s 5ms/step - loss: 53.2400 - val_loss:
72.7631
Epoch 44/100
13/13 [==============================] - 0s 7ms/step - loss: 52.7545 - val_loss:
74.9595
Epoch 45/100
13/13 [==============================] - 0s 7ms/step - loss: 52.3954 - val_loss:
74.4425
Epoch 46/100
13/13 [==============================] - 0s 8ms/step - loss: 51.8754 - val_loss:
75.4689
Epoch 47/100
13/13 [==============================] - 0s 6ms/step - loss: 51.5840 - val_loss:
75.5107
Epoch 48/100
13/13 [==============================] - 0s 7ms/step - loss: 51.3794 - val_loss:
74.1811
Epoch 49/100
13/13 [==============================] - 0s 7ms/step - loss: 50.7736 - val_loss:
77.1227
Epoch 50/100
13/13 [==============================] - 0s 6ms/step - loss: 50.6973 - val_loss:
75.3571
Epoch 51/100
13/13 [==============================] - 0s 7ms/step - loss: 50.3545 - val_loss:
75.5459
Epoch 52/100
13/13 [==============================] - 0s 9ms/step - loss: 50.0110 - val_loss:
74.9539
Epoch 53/100
13/13 [==============================] - 0s 7ms/step - loss: 49.5859 - val_loss:
75.7610
Epoch 54/100
13/13 [==============================] - 0s 7ms/step - loss: 49.2332 - val_loss:
75.3416
Epoch 55/100
13/13 [==============================] - 0s 7ms/step - loss: 48.9296 - val_loss:
74.7842
Epoch 56/100
13/13 [==============================] - 0s 7ms/step - loss: 48.5693 - val_loss:
73.6319
Epoch 57/100
13/13 [==============================] - 0s 7ms/step - loss: 48.2237 - val_loss:
74.5521
Epoch 58/100
13/13 [==============================] - 0s 5ms/step - loss: 47.9942 - val_loss:
74.2557
Epoch 59/100
13/13 [==============================] - 0s 5ms/step - loss: 47.6148 - val_loss:
72.9914
Epoch 60/100
13/13 [==============================] - 0s 7ms/step - loss: 47.3916 - val_loss:
73.7292
Epoch 61/100
13/13 [==============================] - 0s 5ms/step - loss: 47.1164 - val_loss:
71.3105
Epoch 62/100
13/13 [==============================] - 0s 7ms/step - loss: 46.8377 - val_loss:
73.0056
Epoch 63/100
13/13 [==============================] - 0s 7ms/step - loss: 46.1684 - val_loss:
71.6649
Epoch 64/100
13/13 [==============================] - 0s 7ms/step - loss: 45.9518 - val_loss:
71.3532
Epoch 65/100
13/13 [==============================] - 0s 6ms/step - loss: 45.8064 - val_loss:
71.3245
Epoch 66/100
13/13 [==============================] - 0s 6ms/step - loss: 45.3031 - val_loss:
70.0585
Epoch 67/100
13/13 [==============================] - 0s 7ms/step - loss: 44.8652 - val_loss:
70.1172
Epoch 68/100
13/13 [==============================] - 0s 6ms/step - loss: 44.7074 - val_loss:
69.8758
Epoch 69/100
13/13 [==============================] - 0s 5ms/step - loss: 44.4409 - val_loss:
69.5317
Epoch 70/100
13/13 [==============================] - 0s 7ms/step - loss: 43.9173 - val_loss:
68.3756
Epoch 71/100
13/13 [==============================] - 0s 7ms/step - loss: 43.6754 - val_loss:
68.1564
Epoch 72/100
13/13 [==============================] - 0s 7ms/step - loss: 43.0529 - val_loss:
67.8396
Epoch 73/100
13/13 [==============================] - 0s 6ms/step - loss: 42.8460 - val_loss:
67.5689
Epoch 74/100
13/13 [==============================] - 0s 8ms/step - loss: 42.5708 - val_loss:
67.1981
Epoch 75/100
13/13 [==============================] - 0s 7ms/step - loss: 42.2138 - val_loss:
66.7805
Epoch 76/100
13/13 [==============================] - 0s 7ms/step - loss: 41.8676 - val_loss:
66.0953
Epoch 77/100
13/13 [==============================] - 0s 7ms/step - loss: 41.4192 - val_loss:
65.9270
Epoch 78/100
13/13 [==============================] - 0s 8ms/step - loss: 40.9319 - val_loss:
65.6209
Epoch 79/100
13/13 [==============================] - 0s 6ms/step - loss: 40.6032 - val_loss:
64.8652
Epoch 80/100
13/13 [==============================] - 0s 8ms/step - loss: 40.3409 - val_loss:
65.0239
Epoch 81/100
13/13 [==============================] - 0s 6ms/step - loss: 40.0193 - val_loss:
64.4650
Epoch 82/100
13/13 [==============================] - 0s 6ms/step - loss: 39.5756 - val_loss:
64.0286
Epoch 83/100
13/13 [==============================] - 0s 6ms/step - loss: 39.0019 - val_loss:
63.1940
Epoch 84/100
13/13 [==============================] - 0s 6ms/step - loss: 39.1355 - val_loss:
63.2680
Epoch 85/100
13/13 [==============================] - 0s 6ms/step - loss: 38.4614 - val_loss:
63.4833
Epoch 86/100
13/13 [==============================] - 0s 6ms/step - loss: 38.2256 - val_loss:
62.5658
Epoch 87/100
13/13 [==============================] - 0s 7ms/step - loss: 37.5265 - val_loss:
63.4185
Epoch 88/100
13/13 [==============================] - 0s 7ms/step - loss: 37.2877 - val_loss:
62.3244
Epoch 89/100
13/13 [==============================] - 0s 5ms/step - loss: 37.2992 - val_loss:
61.4046
Epoch 90/100
13/13 [==============================] - 0s 4ms/step - loss: 36.6447 - val_loss:
62.3672
Epoch 91/100
13/13 [==============================] - 0s 4ms/step - loss: 36.2271 - val_loss:
60.2370
Epoch 92/100
13/13 [==============================] - 0s 6ms/step - loss: 36.5574 - val_loss:
61.8120
Epoch 93/100
13/13 [==============================] - 0s 5ms/step - loss: 35.8656 - val_loss:
60.5489
Epoch 94/100
13/13 [==============================] - 0s 4ms/step - loss: 35.2197 - val_loss:
62.9568
Epoch 95/100
13/13 [==============================] - 0s 5ms/step - loss: 34.7053 - val_loss:
61.1308
Epoch 96/100
13/13 [==============================] - 0s 4ms/step - loss: 34.6180 - val_loss:
62.5150
Epoch 97/100
13/13 [==============================] - 0s 5ms/step - loss: 34.1021 - val_loss:
62.0751
Epoch 98/100
13/13 [==============================] - 0s 5ms/step - loss: 33.7709 - val_loss:
61.6955
Epoch 99/100
13/13 [==============================] - 0s 5ms/step - loss: 33.4811 - val_loss:
61.1410
Epoch 100/100
13/13 [==============================] - 0s 6ms/step - loss: 33.0442 - val_loss:
61.5967
4/4 [==============================] - 0s 3ms/step - loss: 61.5967
Test loss: 61.5966682434082
EXPERIMENT NO – 5
Aim: Build a Convolutional Neural Network for MNIST Handwritten Digit Classification.
In this implementation, we first load the MNIST dataset using the mnist.load_data() function
from Keras. The training data consists of the x_train images and their corresponding y_train
labels, while the test data consists of the x_test images and their corresponding y_test labels.
We then preprocess the data for the CNN by reshaping each image to a 28 x 28 x 1 array
(adding a single channel dimension) and scaling the pixel values to the range 0 to 1 by
dividing by 255.
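The loading and preprocessing code is not reproduced here; a sketch consistent with the description above might be:
from tensorflow.keras.datasets import mnist

# Load MNIST, add a channel dimension, and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1) / 255.0
x_test = x_test.reshape(-1, 28, 28, 1) / 255.0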
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
# Build the CNN; the convolution filter counts (32/64/64) are illustrative
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
The next step is to define the CNN architecture. For this task, we will use a simple CNN with
three convolutional layers using the ReLU activation function, the first two followed by max
pooling layers, then a flatten layer and two fully connected (dense) layers. The final output
layer will have 10 neurons, one for each digit class, and we will use the softmax activation
function to produce probabilities for each class.
# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
We compile the model with the Adam optimizer, sparse_categorical_crossentropy loss, and
accuracy metric.
# Train the model
We train the model on the training data for 10 epochs. Finally, we evaluate the model on the
test data and print the accuracy score.
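The fit and evaluate calls themselves are not reproduced in the manual; a minimal sketch consistent with the training log below (1,875 steps per epoch on 60,000 images corresponds to the default batch size of 32) might be:
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10)
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)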
Output:
Epoch 1/10
1875/1875 [==============================] - 72s 38ms/step - loss: 0.2613 - accuracy: 0.9220 - val_loss: 0.0447
- val_accuracy: 0.9859
Epoch 2/10
1875/1875 [==============================] - 66s 35ms/step - loss: 0.0927 - accuracy: 0.9752 - val_loss: 0.0491
- val_accuracy: 0.9852
Epoch 3/10
1875/1875 [==============================] - 69s 37ms/step - loss: 0.0650 - accuracy: 0.9821 - val_loss: 0.0339
- val_accuracy: 0.9892
Epoch 4/10
1875/1875 [==============================] - 68s 36ms/step - loss: 0.0516 - accuracy: 0.9855 - val_loss: 0.0328
- val_accuracy: 0.9895
Epoch 5/10
1875/1875 [==============================] - 68s 36ms/step - loss: 0.0413 - accuracy: 0.9887 - val_loss: 0.0315
- val_accuracy: 0.9907
Epoch 6/10
1875/1875 [==============================] - 65s 35ms/step - loss: 0.0359 - accuracy: 0.9898 - val_loss: 0.0270
- val_accuracy: 0.9925
Epoch 7/10
1875/1875 [==============================] - 65s 35ms/step - loss: 0.0312 - accuracy: 0.9910 - val_loss: 0.0278
- val_accuracy: 0.9920
Epoch 8/10
1875/1875 [==============================] - 68s 36ms/step - loss: 0.0283 - accuracy: 0.9920 - val_loss: 0.0365
- val_accuracy: 0.9908
Epoch 9/10
1875/1875 [==============================] - 65s 35ms/step - loss: 0.0233 - accuracy: 0.9931 - val_loss: 0.0324
- val_accuracy: 0.9927
Epoch 10/10
1875/1875 [==============================] - 67s 36ms/step - loss: 0.0197 - accuracy: 0.9938 - val_loss: 0.0346
- val_accuracy: 0.9941
EXPERIMENT NO – 6
Aim: Classify images of cats and dogs using transfer learning with the pre-trained VGG16 network.
Program:
# import the libraries as shown below
In this step we import the required modules from Keras and TensorFlow. Here we use the
pre-defined neural network called VGG16.
In this step we mount our Google Drive in the Colab environment.
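The mount call is the standard Colab helper (it also appears in Experiment 7 below):
from google.colab import drive
drive.mount('/content/drive')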
Mounted at /content/drive
ROOT_PATH = '/content/drive/MyDrive/cat-dog-project-20230412T173959Z-001/cat-dog-project'
!pwd
/content
import os
os.chdir(ROOT_PATH)
os.getcwd()
/content/drive/MyDrive/cat-dog-project-20230412T173959Z-001/cat-dog-project
train_path = 'PetImages/train'
valid_path = 'PetImages/validation'
# Import the VGG16 library as shown below and add a preprocessing layer to the front of VGG
# Here we will be using imagenet weights
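The loading code itself is not shown; a sketch consistent with the summary output below (224 x 224 x 3 input, include_top=False so that the "notop" weights are downloaded) might be:
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten

# Load VGG16 with ImageNet weights, dropping its original classifier head
vgg16 = VGG16(input_shape=(224, 224, 3), weights='imagenet', include_top=False)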
Output:
Downloading data from https://storage.googleapis.com/tensorflow/keras-
applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
58889256/58889256 [==============================] - 0s 0us/step
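The loop that produced the layer listing below is not reproduced; it is presumably just iterating over the network's layers:
for layer in vgg16.layers:
    print(layer)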
Output:
<keras.engine.input_layer.InputLayer object at 0x7fa764e92190>
<keras.layers.convolutional.conv2d.Conv2D object at 0x7fa764e92b20>
<keras.layers.convolutional.conv2d.Conv2D object at 0x7fa7645e4310>
<keras.layers.pooling.max_pooling2d.MaxPooling2D object at 0x7fa7645ae6d0>
<keras.layers.convolutional.conv2d.Conv2D object at 0x7fa7645e4910>
<keras.layers.convolutional.conv2d.Conv2D object at 0x7fa7645c6880>
<keras.layers.pooling.max_pooling2d.MaxPooling2D object at 0x7fa7640cf4f0>
<keras.layers.convolutional.conv2d.Conv2D object at 0x7fa7645c61f0>
<keras.layers.convolutional.conv2d.Conv2D object at 0x7fa7640d2970>
<keras.layers.convolutional.conv2d.Conv2D object at 0x7fa7640d9610>
<keras.layers.pooling.max_pooling2d.MaxPooling2D object at 0x7fa7640df700>
<keras.layers.convolutional.conv2d.Conv2D object at 0x7fa7640e5190>
<keras.layers.convolutional.conv2d.Conv2D object at 0x7fa7640e5d30>
<keras.layers.convolutional.conv2d.Conv2D object at 0x7fa7640df8b0>
<keras.layers.pooling.max_pooling2d.MaxPooling2D object at 0x7fa7640efca0>
<keras.layers.convolutional.conv2d.Conv2D object at 0x7fa7640ef760>
<keras.layers.convolutional.conv2d.Conv2D object at 0x7fa7640d9970>
<keras.layers.convolutional.conv2d.Conv2D object at 0x7fa76407c520>
<keras.layers.pooling.max_pooling2d.MaxPooling2D object at 0x7fa7640f57f0>
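The freezing step is likewise not shown; a sketch consistent with the listing below (every layer reported as non-trainable) might be:
# Freeze all VGG16 layers so only the new classifier head will be trained
for layer in vgg16.layers:
    layer.trainable = False
for layer in vgg16.layers:
    print(layer.name, layer.trainable)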
Output:
input_1 False
block1_conv1 False
block1_conv2 False
block1_pool False
block2_conv1 False
block2_conv2 False
block2_pool False
block3_conv1 False
block3_conv2 False
block3_conv3 False
block3_pool False
block4_conv1 False
block4_conv2 False
block4_conv3 False
block4_pool False
block5_conv1 False
block5_conv2 False
block5_conv3 False
block5_pool False
vgg16.summary()
Output:
Model: "vgg16"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 224, 224, 3)] 0
=================================================================
Total params: 14,714,688
Trainable params: 0
Non-trainable params: 14,714,688
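The folders variable used below is not defined in the listing; it is presumably obtained by globbing the class sub-directories of the training folder:
from glob import glob
folders = glob('PetImages/train/*')  # one sub-directory per class (cat, dog)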
len(folders)
model = Sequential()
model.add(vgg16)
model.add(Flatten())
model.add(Dense(256,activation='relu'))
model.add(Dense(2,activation='softmax'))
=================================================================
Total params: 21,137,986
Trainable params: 6,423,298
Non-trainable params: 14,714,688
# Use the Image Data Generator to import the images from the dataset
from tensorflow.keras.preprocessing.image import ImageDataGenerator
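The generator objects used below are not defined in the listing; typical definitions (the rescaling and augmentation settings are assumptions) would be:
train_datagen = ImageDataGenerator(rescale=1./255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)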
# Make sure you provide the same target size as initialized for the image size
training_set = train_datagen.flow_from_directory('PetImages/train',
target_size = (224, 224),
batch_size = 32,
class_mode = 'categorical')
test_set = test_datagen.flow_from_directory('PetImages/validation',
target_size = (224, 224),
batch_size = 32,
class_mode = 'categorical')
# save it as a h5 file
from tensorflow.keras.models import load_model
model.save('model_vgg16.h5')
y_pred = model.predict(test_set)
y_pred
import numpy as np
y_pred = np.argmax(y_pred, axis=1)
y_pred
import matplotlib.pyplot as plt
Z = plt.imread('cat.jpg')
plt.imshow(Z)
<matplotlib.image.AxesImage at 0x7fa6ca436130>
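The array x used below is not created in the listing; it is presumably the same image loaded at the model's 224 x 224 input size:
from tensorflow.keras.preprocessing import image
img = image.load_img('cat.jpg', target_size=(224, 224))
x = image.img_to_array(img)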
x.shape
x=x/255
from keras.applications.vgg16 import preprocess_input
import numpy as np
x=np.expand_dims(x,axis=0)
img_data=preprocess_input(x)
img_data.shape
model.predict(img_data)
result = np.argmax(model.predict(img_data), axis=1)
result[0]
if result[0] == 1:
    prediction = 'dog'
    print(prediction)
else:
    prediction = 'cat'
    print(prediction)
cat
EXPERIMENT NO – 7
Aim: Use a pre-trained convolution neural network (VGG16) for image classification.
Procedure:
VGG16 is a convolutional neural network (CNN) architecture that was developed by
researchers at the Visual Geometry Group (VGG) at the University of Oxford. It was
introduced in the paper titled "Very Deep Convolutional Networks for Large-Scale Image
Recognition" by Karen Simonyan and Andrew Zisserman in 2014.
Load the pre-trained VGG16 model: You can load the pre-trained VGG16 model by calling the VGG16() function from
the Keras library.
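The imports and the load call are not reproduced here; a minimal sketch might be:
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

# Load the full VGG16 model, including its ImageNet classifier head
model = VGG16(weights='imagenet')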
img_path = '/cat.jpg'
img = image.load_img(img_path, target_size=(224, 224))
Load an image for classification: You can use any image that you want to classify.
from google.colab import drive
drive.mount('/content/drive')
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
You'll need to preprocess the image before feeding it to the VGG16 model. You can do this using the
preprocess_input() function from the Keras library.
preds = model.predict(x)
Predict the class of the image: You can predict the class of the image using the predict() function of the VGG16
model.
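The step that turns preds into the ranked class list shown in the output is not reproduced; a sketch using decode_predictions() might be:
# Print the top-3 ImageNet classes with their probabilities
for i, (class_id, label, prob) in enumerate(decode_predictions(preds, top=3)[0], start=1):
    print('%d. %s: %.2f%%' % (i, label, prob * 100))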
Output:
Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/imagenet_class_index.json
35363/35363 [==============================] - 0s 0us/step
1. tabby: 44.72%
2. tiger_cat: 42.62%
3. Egyptian_cat: 6.74%
EXPERIMENT NO – 8
Aim: Implement one-hot encoding of words or characters.
Procedure:
One-hot encoding is a technique used to represent categorical data as numerical data. In
the context of natural language processing (NLP), one-hot encoding can be used to
represent words or characters as vectors of numbers.
In one-hot encoding, each word or character is assigned a unique index, and a vector of
zeros is created with the length equal to the total number of words or characters in the
vocabulary. The index of the word or character is set to 1 in the corresponding position in
the vector, and all other positions are set to 0.
For example, suppose we have a vocabulary of four words: "apple", "banana", "cherry", and
"date". Each word is assigned a unique index: 0, 1, 2, and 3, respectively. The one-hot
encoding of the word "banana" would be [0, 1, 0, 0], because it is in the second position in
the vocabulary.
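A small, self-contained illustration of this example (plain Python, hypothetical variable names) might be:
vocabulary = ["apple", "banana", "cherry", "date"]
word = "banana"
# Build a zero vector as long as the vocabulary and set the word's index to 1
one_hot_vector = [1 if i == vocabulary.index(word) else 0 for i in range(len(vocabulary))]
print(one_hot_vector)  # [0, 1, 0, 0]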
In Python, we can implement one-hot encoding with the help of the
keras.preprocessing.text.one_hot() function from the Keras library. This function takes a
text string and the size of the vocabulary, and uses a hashing trick to convert each word to
an integer index; these indices can then be expanded into one-hot encoded vectors.
Program:
from tensorflow.keras.preprocessing.text import one_hot
print(one_hot_words)
Output:
[[0, 0, 1], [1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
import string
print(one_hot_chars)
Output:
[[0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0], [1,
0, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 0], [0, 0, 1,
0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0,
0, 0, 0], [0, 0, 0, 0, 0, 1, 0, 0]]
EXPERIMENT NO – 9
Aim: Implement word embeddings for IMDB dataset
Procedure:
Word embeddings are a type of representation learning that can be used to convert words
into numerical vectors. In natural language processing (NLP), word embeddings are
commonly used to represent words as dense vectors in a high-dimensional space. This
allows us to perform various NLP tasks such as text classification, sentiment analysis, and
language translation.
In this example, we will implement word embeddings for the IMDB dataset, which consists
of movie reviews labeled as positive or negative. We will use the Keras library to implement
the word embeddings.
First, we will load the IMDB dataset using Keras. The dataset consists of 50,000 movie
reviews, with 25,000 reviews for training and 25,000 reviews for testing. Each review is a
sequence of words, and the label is either 0 (negative) or 1 (positive).
Program:
from keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Flatten, Embedding
# Load the IMDB dataset and split it into training and testing sets
vocab_size = 5000  # keep only the top 5,000 most frequent words
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=vocab_size)
We set num_words to 5000, which means we will only use the top 5000 most frequent
words in the dataset.
Next, we will preprocess the data by padding the sequences to a fixed length and truncating
sequences that are longer than the fixed length.
# Pad the sequences to ensure that they all have the same length
maxlen = 100  # truncate or pad every review to 100 words
X_train = pad_sequences(X_train, maxlen=maxlen)
X_test = pad_sequences(X_test, maxlen=maxlen)
The pad_sequences function pads the sequences with zeros to ensure they are all the same
length. Here, we set maxlen to 100, which means that all sequences will be truncated or
padded to 100 words.
# Define the embedding dimension
embedding_dim = 50
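The model definition is not reproduced in the manual; a minimal sketch using the layers imported above (the Flatten-plus-Dense head is an assumption) might be:
# Embed each word index into a 50-dimensional vector, then classify the flattened sequence
model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])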
EXPERIMENT NO – 10
Aim: Implement a Recurrent Neural Network (RNN) for IMDB movie review classification.
Procedure:
To implement a Recurrent Neural Network (RNN) for the IMDB movie review classification
problem, we will use the Keras deep learning library, which provides a simple and intuitive
interface for building and training neural networks.
The IMDB movie review classification problem is a binary classification task, where the goal
is to classify movie reviews as either positive or negative. The dataset contains 50,000
movie reviews, split into 25,000 for training and 25,000 for testing. Each review is a
sequence of words, and the task is to predict whether the overall sentiment of the review is
positive or negative.
To build our RNN, we will use an architecture called Long Short-Term Memory (LSTM),
which is a type of RNN that is particularly good at processing sequential data. The basic idea
behind LSTMs is to allow the network to selectively remember or forget information from
previous time steps, which makes them well-suited for tasks like natural language
processing.
Program:
We load the IMDB dataset using the imdb.load_data() function, which returns the training
and testing data as tuples (x_train, y_train) and (x_test, y_test), where x_train and x_test
are arrays of sequences and y_train and y_test are the corresponding labels.
We compile the model using the compile() function, specifying the loss
function, optimizer, and metrics to use during training.
Once the model is trained, the next step is to evaluate its performance on the test data. We
will use the evaluate method in Keras to evaluate the model.
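The program listing is not reproduced in the manual; a minimal sketch consistent with the description and the training log below (5 epochs; the 391 steps per epoch suggest a batch size of 64, which is an assumption, as are the vocabulary size and review length) might be:
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

max_features = 5000   # vocabulary size (assumed)
maxlen = 500          # maximum review length (assumed)

# Load and pad the reviews so every sequence has the same length
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = pad_sequences(x_train, maxlen=maxlen)
x_test = pad_sequences(x_test, maxlen=maxlen)

# Embedding layer followed by an LSTM and a sigmoid output for binary sentiment
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=5, batch_size=64)

loss, accuracy = model.evaluate(x_test, y_test)
print('Test accuracy:', accuracy)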
Output:
Epoch 1/5
391/391 [==============================] - 90s 224ms/step - loss: 0.4158 -
accuracy: 0.8017 - val_loss: 0.2988 - val_accuracy: 0.8741
Epoch 2/5
391/391 [==============================] - 85s 216ms/step - loss: 0.2428 -
accuracy: 0.9062 - val_loss: 0.3196 - val_accuracy: 0.8702
Epoch 3/5
391/391 [==============================] - 85s 217ms/step - loss: 0.1863 -
accuracy: 0.9306 - val_loss: 0.3453 - val_accuracy: 0.8642
Epoch 4/5
391/391 [==============================] - 86s 220ms/step - loss: 0.1401 -
accuracy: 0.9495 - val_loss: 0.3542 - val_accuracy: 0.8557
Epoch 5/5
391/391 [==============================] - 86s 220ms/step - loss: 0.1171 -
accuracy: 0.9586 - val_loss: 0.3992 - val_accuracy: 0.8589
782/782 [==============================] - 27s 34ms/step - loss: 0.3992 - accuracy:
0.8589
Test accuracy: 0.8588799834251404