
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CCS 355 – NEURAL NETWORKS AND DEEP LEARNING

LAB RECORD
Bonafide Certificate

Certified that this is a Bonafide Record of practical work


done by Mr./Ms.______________________________ of
the_________Year/Semester in _________________
__________________________________ department of this
College, in the _______________________________________
Laboratory during the academic year _____________

Staff – in- Charge HOD

Register Number _____________________


End Exam held on ____________________

Internal Examiner External Examiner

INDEX

S.NO  PROGRAM                                                           P.NO  MARK  SIGN
1     Implement simple matrix operations in TensorFlow
2     Implement a perceptron in TensorFlow/Keras environment
3     Implement a feedforward network in TensorFlow/Keras
4     Implement a regression model in Keras
5     Implement an image classifier using CNN in TensorFlow/Keras
6     Improve the deep learning model by fine-tuning hyperparameters
7     Implement transfer learning concept in image classification
8     Using a pretrained model in Keras for transfer learning
9     Perform sentiment analysis using RNN
10    Implement an LSTM based autoencoder in TensorFlow/Keras
11    Image generation using GAN
EX.NO. 1: IMPLEMENT SIMPLE MATRIX OPERATIONS IN TENSORFLOW
AIM
To implement matrix operations using tensorflow.
ALGORITHM
Step 1: Import Required Modules
Step 2: Declare matrices, scalar and vector
Step 3: Find the dimension of scalar, vector and matrices.
Step 4: Perform required operation using arithmetic operators.
Step 5: Display the output.
PROGRAM
import tensorflow as tf
scalar = tf.constant(7)
print(scalar)
print(scalar.ndim)
vector = tf.constant([10, 0])
print(vector.ndim)
matrix = tf.constant([[1, 2], [3, 4]])
print(matrix)
print("The number of dimensions of matrix is: " + str(matrix.ndim))
matrix = tf.constant([[1, 2], [3, 4]])
matrix1 = tf.constant([[2, 4], [6, 8]])
print(matrix + matrix1)
print(matrix1 - matrix)
print(matrix1 * matrix)
print(matrix1 / matrix)
matrix = tf.constant([[1, 3], [5, 7]])
matrix1 = tf.constant([[2, 4], [6, 8]])
print(matrix)
print(tf.transpose(matrix))
print("Dot product of matrices is: " + str(tf.tensordot(matrix, matrix1, axes=1)))
Output
Original matrices
[[1. 3.]
[5. 7.]]
[[2. 4.]
[6. 8.]]
Matrix Multiplication Result:
[[20. 28.]
[52. 76.]]
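Note that tf.tensordot with axes=1 on two 2-D tensors is equivalent to ordinary matrix multiplication. A minimal sketch, assuming the same matrices as above, that verifies this with tf.matmul and contrasts it with the element-wise product:
import tensorflow as tf
matrix = tf.constant([[1, 3], [5, 7]])
matrix1 = tf.constant([[2, 4], [6, 8]])
# Matrix product: tf.matmul and tf.tensordot(axes=1) give the same 2x2 result
print(tf.matmul(matrix, matrix1))
print(tf.tensordot(matrix, matrix1, axes=1))
# Element-wise product (*) is a different operation
print(matrix * matrix1)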
RESULT
Thus, the matrix operations are successfully executed in tensorflow.
EX.NO. 2: IMPLEMENT A PERCEPTRON IN TENSORFLOW / KERAS ENVIRONMENT
AIM
To implement a multilayer perceptron in a Jupyter notebook using the TensorFlow/Keras environment.
ALGORITHM
Step 1: Import Required Modules and Load Dataset
Step 2: Load and Normalize Image Data
Step 3: Visualize Data using matplotlib
Step 4: Build the Neural Network Model
Step 5: Compile the Model
Step 6: Train the Model
Step 7: Evaluate the Model
PROGRAM
# Importing necessary modules
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense
# Load MNIST dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Normalize image pixel values by dividing by 255 (grayscale)
gray_scale = 255
x_train = x_train.astype('float32') / gray_scale
x_test = x_test.astype('float32') / gray_scale
# Checking the shape of feature and target matrices
print("Feature matrix (x_train):", x_train.shape)
print("Target matrix (y_train):", y_train.shape)
print("Feature matrix (x_test):", x_test.shape)
print("Target matrix (y_test):", y_test.shape)
Output
Feature matrix (x_train): (60000, 28, 28)
Target matrix (y_train): (60000,)
Feature matrix (x_test): (10000, 28, 28)
Target matrix (y_test): (10000,)
# Visualizing 100 images from the training data
fig, ax = plt.subplots(10, 10)
k = 0
for i in range(10):
    for j in range(10):
        ax[i][j].imshow(x_train[k].reshape(28, 28), aspect='auto')
        k += 1
plt.show()
Output

# Building the Sequential neural network model
model = Sequential([
    # Flatten input from 28x28 images to a 784 (28*28) vector
    Flatten(input_shape=(28, 28)),
    # Dense layer 1 (256 neurons)
    Dense(256, activation='sigmoid'),
    # Dense layer 2 (128 neurons)
    Dense(128, activation='sigmoid'),
    # Output layer (10 classes)
    Dense(10, activation='sigmoid'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10,
          batch_size=2000,
          validation_split=0.2)
Output
Epoch 1/10
24/24 ━━━━━━━━━━━━━━━━━━━━ 5s 141ms/step - accuracy: 0.2382 - loss: 2.2627 -
val_accuracy: 0.6237 - val_loss: 1.7715
Epoch 2/10
24/24 ━━━━━━━━━━━━━━━━━━━━ 3s 55ms/step - accuracy: 0.6948 - loss: 1.6032 -
val_accuracy: 0.8134 - val_loss: 1.0656
...
Epoch 9/10
24/24 ━━━━━━━━━━━━━━━━━━━━ 2s 54ms/step - accuracy: 0.9194 - loss: 0.3018 -
val_accuracy: 0.9250 - val_loss: 0.2774
Epoch 10/10
24/24 ━━━━━━━━━━━━━━━━━━━━ 1s 57ms/step - accuracy: 0.9234 - loss: 0.2805 -
val_accuracy: 0.9285 - val_loss: 0.2623
# Evaluating the model on test data
results = model.evaluate(x_test, y_test, verbose=0)
print('Test loss, Test accuracy:', results)
Output
Test loss, Test accuracy: [0.2682029604911804, 0.9257000088691711]
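To inspect individual predictions of the trained perceptron, a minimal sketch, assuming the model and x_test/y_test from above, that converts the ten output scores for the first few test digits into predicted labels:
import numpy as np
# Scores for the first five test images; the predicted digit is the index of the largest score
pred_scores = model.predict(x_test[:5])
pred_labels = np.argmax(pred_scores, axis=1)
print("Predicted labels:", pred_labels)
print("Actual labels:   ", y_test[:5])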
RESULT
Thus, a multilayer perceptron was built, trained, and evaluated; the code was executed and the outputs were recorded.

EX.NO. 3: IMPLEMENT A FEEDFORWARD NEURAL NETWORK IN TENSORFLOW/KERAS
AIM
To implement a feedforward neural network with a single hidden layer in TensorFlow/Keras.
ALGORITHM
Step 1: Import Required Modules and Load Dataset
Step 2: Normalize the Image Data
Step 3: Build the Neural Network Model
Step 4: Compile the Model
Step 5: Train the Model
Step 6: Evaluate the Model
PROGRAM
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.metrics import SparseCategoricalAccuracy
# Load and prepare the MNIST dataset
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Build the model
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax')
])
# Compile the model
model.compile(optimizer=Adam(),
              loss=SparseCategoricalCrossentropy(),
              metrics=[SparseCategoricalAccuracy()])
# Train the model
model.fit(x_train, y_train, epochs=5)
# Evaluate the model
test_loss, test_acc = model.evaluate(x_test, y_test)
print(f'\nTest accuracy: {test_acc}')
Output
Test accuracy: 0.9767000079154968
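As an optional extension, the trained network can be saved to disk and reloaded for later use. A minimal sketch assuming the model from above (the file name mnist_ffn.keras is illustrative; older TensorFlow versions may require an .h5 extension instead):
# Save the trained model and load it back
model.save('mnist_ffn.keras')
reloaded = tf.keras.models.load_model('mnist_ffn.keras')
# The reloaded model should report the same test loss and accuracy
print(reloaded.evaluate(x_test, y_test, verbose=0))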
RESULT
The feedforward neural network is constructed using tensorflow and the accuracy of the
model is evaluated.

EX.NO. 4: IMPLEMENT A REGRESSION MODEL IN KERAS


AIM
To implement a regression model using keras in Jupyter.
ALGORITHM
Step 1 - Loading the required libraries and modules.
Step 2 - Loading the data and performing basic data checks.
Step 3 - Creating arrays for the features and the response variable.
Step 4 - Creating the training and test datasets.
Step 5 - Define, compile, and fit the Keras regression model.
Step 6 - Predict on the test data and compute evaluation metrics.
PROGRAM
# Import required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn
# Import necessary modules
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from math import sqrt
# Keras specific
import keras
from keras.models import Sequential
from keras.layers import Dense
df = pd.read_csv('regressionexample.csv')
print(df.shape)
df.describe()
(574, 5)
target_column = ['unemploy']
predictors = list(set(list(df.columns))-set(target_column))
df[predictors] = df[predictors]/df[predictors].max()
df.describe()
X = df[predictors].values
y = df[target_column].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=40)
print(X_train.shape); print(X_test.shape)
(401, 4)
(173, 4)
# Define model
model = Sequential()
model.add(Dense(500, input_dim=4, activation="relu"))
model.add(Dense(100, activation="relu"))
model.add(Dense(50, activation="relu"))
model.add(Dense(1))
# model.summary()  # Print model summary
model.compile(loss="mean_squared_error", optimizer="adam",
              metrics=["mean_squared_error"])
model.fit(X_train, y_train, epochs=20)
OUTPUT
Epoch 1/20
401/401 [==============================] - 0s 1ms/step - loss: 68136318.3441 -
mean_squared_error: 68136318.3441
Epoch 2/20
401/401 [==============================] - 0s 133us/step - loss: 68101432.0698 -
mean_squared_error: 68101432.0698
Epoch 3/20
401/401 [==============================] - 0s 125us/step - loss: 67985495.1022 -
mean_squared_error: 67985495.1022
Epoch 4/20
401/401 [==============================] - 0s 134us/step - loss: 67665023.0524 -
mean_squared_error: 67665023.0524
Epoch 5/20
401/401 [==============================] - 0s 127us/step - loss: 66899397.2868 -
mean_squared_error: 66899397.2868
Epoch 6/20
401/401 [==============================] - 0s 107us/step - loss: 65355226.3042 -
mean_squared_error: 65355226.3042
Epoch 7/20
401/401 [==============================] - 0s 120us/step - loss: 62432633.3566 -
mean_squared_error: 62432633.3566
Epoch 8/20
401/401 [==============================] - 0s 128us/step - loss: 57537882.0549 -
mean_squared_error: 57537882.0549
Epoch 9/20
401/401 [==============================] - 0s 150us/step - loss: 50086165.6958 -
mean_squared_error: 50086165.6958
Epoch 10/20
401/401 [==============================] - 0s 119us/step - loss: 39984370.9975 -
mean_squared_error: 39984370.9975
Epoch 11/20
401/401 [==============================] - 0s 97us/step - loss: 28126145.2868 -
mean_squared_error: 28126145.2868
Epoch 12/20
401/401 [==============================] - 0s 110us/step - loss: 16095036.0499 -
mean_squared_error: 16095036.0499
Epoch 13/20
401/401 [==============================] - 0s 126us/step - loss: 7629222.0150 -
mean_squared_error: 7629222.0150
Epoch 14/20
401/401 [==============================] - 0s 107us/step - loss: 4147607.1696 -
mean_squared_error: 4147607.1696
Epoch 15/20
401/401 [==============================] - 0s 107us/step - loss: 3668975.7581 -
mean_squared_error: 3668975.7581
Epoch 16/20
401/401 [==============================] - 0s 111us/step - loss: 3646548.0898 -
mean_squared_error: 3646548.0898
Epoch 17/20
401/401 [==============================] - 0s 126us/step - loss: 3563563.1328 -
mean_squared_error: 3563563.1328
Epoch 18/20
401/401 [==============================] - 0s 117us/step - loss: 3533091.9377 -
mean_squared_error: 3533091.9377
Epoch 19/20
401/401 [==============================] - 0s 123us/step - loss: 3496560.1110 -
mean_squared_error: 3496560.1110
Epoch 20/20
401/401 [==============================] - 0s 132us/step - loss: 3467280.0112 -
mean_squared_error: 3467280.0112
pred_train = model.predict(X_train)
print(np.sqrt(mean_squared_error(y_train, pred_train)))
pred = model.predict(X_test)
print(np.sqrt(mean_squared_error(y_test, pred)))
1856.4850642445354
1825.5904063232729
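The RMSE values printed above can also be computed explicitly, which makes the metric easier to interpret. A minimal sketch assuming y_test and pred from the program above:
# RMSE is the square root of the mean squared difference between actual and predicted values
errors = y_test - pred
rmse_manual = np.sqrt(np.mean(errors ** 2))
print(rmse_manual)  # should match the test RMSE printed above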
RESULT
The output above shows that the RMSE, our evaluation metric, was approximately 1856 thousand for the training data and 1825 thousand for the test data. In general, the lower the RMSE value, the better the model performance.
EX.NO. 5: IMPLEMENT AN IMAGE CLASSIFIER USING CNN IN TENSORFLOW
AIM
To implement a simple image classifier using Convolutional Neural Network (CNN) in
tensorflow.
ALGORITHM
Step 1 – Import tensorflow
Step 2 – Download and prepare CIFAR10 dataset
Step 3 – Verify the data
Step 4 – Create the convolutional base
Step 5 – Display the architecture of a model
Step 6 – Add dense layers on the top and display the architecture of a model used
Step 7 – Compile and train the model
Step 8 – Evaluate the model
PROGRAM
# import tensorflow
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
# load dataset
import matplotlib.pyplot as plt
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0
Output
Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170498071/170498071 ━━━━━━━━━━━━━━━━━━━━ 4s 0us/step
#verify the data
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i])
    # The CIFAR labels happen to be arrays,
    # which is why you need the extra index
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()
Observation
Record your observation while executing the above code (a 5x5 grid of sample CIFAR-10 training images with their class labels should be displayed).
# create CNN base
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
# display architecture of a model
model.summary()
Output
Model: "sequential"
Layer (type) ┃ Output Shape ┃ Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━
━━━━━━━━━━━━━┩
│ conv2d (Conv2D) │ (None, 30, 30, 32) │ 896 │
├─────────────────────────────────┼──────────────────
──────┼───────────────┤
│ max_pooling2d (MaxPooling2D) │ (None, 15, 15, 32) │ 0│
├─────────────────────────────────┼──────────────────
──────┼───────────────┤
│ conv2d_1 (Conv2D) │ (None, 13, 13, 64) │ 18,496 │
├─────────────────────────────────┼──────────────────
──────┼───────────────┤
│ max_pooling2d_1 (MaxPooling2D) │ (None, 6, 6, 64) │ 0│
├─────────────────────────────────┼──────────────────
──────┼───────────────┤
│ conv2d_2 (Conv2D) │ (None, 4, 4, 64) │ 36,928 │
└─────────────────────────────────┴──────────────────
──────┴───────────────┘
Total params: 56,320 (220.00 KB)
Trainable params: 56,320 (220.00 KB)
Non-trainable params: 0 (0.00 B)
# add dense layers on top
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))
# display the architecture of the model
model.summary()
Output
Model: "sequential"
Layer (type)                     Output Shape          Param #
conv2d (Conv2D)                  (None, 30, 30, 32)        896
max_pooling2d (MaxPooling2D)     (None, 15, 15, 32)          0
conv2d_1 (Conv2D)                (None, 13, 13, 64)     18,496
max_pooling2d_1 (MaxPooling2D)   (None, 6, 6, 64)            0
conv2d_2 (Conv2D)                (None, 4, 4, 64)        36,928
flatten (Flatten)                (None, 1024)                0
dense (Dense)                    (None, 64)             65,600
dense_1 (Dense)                  (None, 10)                650
Total params: 122,570 (478.79 KB)
Trainable params: 122,570 (478.79 KB)
Non-trainable params: 0 (0.00 B)
# compile and train the model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_images, train_labels, epochs=10,
validation_data=(test_images, test_labels))
Output
Epoch 1/10
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1723778386.322623 135108 service.cc:146] XLA service 0x7f254c005a30
initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
I0000 00:00:1723778386.322659 135108 service.cc:154] StreamExecutor device (0): Tesla
T4, Compute Capability 7.5
I0000 00:00:1723778386.322663 135108 service.cc:154] StreamExecutor device (1): Tesla
T4, Compute Capability 7.5
I0000 00:00:1723778386.322667 135108 service.cc:154] StreamExecutor device (2): Tesla
T4, Compute Capability 7.5
I0000 00:00:1723778386.322670 135108 service.cc:154] StreamExecutor device (3): Tesla
T4, Compute Capability 7.5
70/1563 ━━━━━━━━━━━━━━━━━━━━ 3s 2ms/step - accuracy: 0.1470 - loss: 2.2585
I0000 00:00:1723778388.120198 135108 device_compiler.h:188] Compiled cluster using
XLA! This line is logged at most once for the lifetime of the process.
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 10s 4ms/step - accuracy: 0.3569 - loss: 1.7377 -
val_accuracy: 0.5424 - val_loss: 1.2687
Epoch 2/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - accuracy: 0.5721 - loss: 1.1966 -
val_accuracy: 0.5967 - val_loss: 1.1490
Epoch 3/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - accuracy: 0.6389 - loss: 1.0240 -
val_accuracy: 0.6467 - val_loss: 0.9909
Epoch 4/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 4s 3ms/step - accuracy: 0.6764 - loss: 0.9159 -
val_accuracy: 0.6715 - val_loss: 0.9493
Epoch 5/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 4s 3ms/step - accuracy: 0.6989 - loss: 0.8506 -
val_accuracy: 0.6999 - val_loss: 0.8737
Epoch 6/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - accuracy: 0.7275 - loss: 0.7811 -
val_accuracy: 0.7032 - val_loss: 0.8687
Epoch 7/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - accuracy: 0.7479 - loss: 0.7160 -
val_accuracy: 0.7088 - val_loss: 0.8571
Epoch 8/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - accuracy: 0.7650 - loss: 0.6752 -
val_accuracy: 0.7035 - val_loss: 0.8852
Epoch 9/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 4s 2ms/step - accuracy: 0.7780 - loss: 0.6372 -
val_accuracy: 0.7126 - val_loss: 0.8672
Epoch 10/10
1563/1563 ━━━━━━━━━━━━━━━━━━━━ 4s 3ms/step - accuracy: 0.7946 - loss: 0.5862 -
val_accuracy: 0.7163 - val_loss: 0.8550

# evaluate the model


plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label = 'val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
Output
313/313 - 0s - 2ms/step - accuracy: 0.7163 - loss: 0.8550

print(test_acc)
Output
0.7163000106811523
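To see what the trained CNN predicts for individual images, a minimal sketch assuming model, test_images, test_labels and class_names from above; the final Dense layer outputs raw logits, so the predicted class is simply the index of the largest logit:
import numpy as np
logits = model.predict(test_images[:5])
pred_classes = np.argmax(logits, axis=1)
for p, actual in zip(pred_classes, test_labels[:5]):
    print("Predicted:", class_names[p], "| Actual:", class_names[actual[0]])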
RESULT
Thus, a simple CNN image classifier was built in TensorFlow/Keras and achieved a test accuracy of over 70% on CIFAR-10.

EX.NO. 6: IMPROVE DEEP LEARNING MODEL BY FINE-TUNING HYPERPARAMETERS
AIM
To improve a model by fine-tuning its hyperparameters using GridSearchCV.
ALGORITHM
Step 1 – Import necessary packages
Step 2 – Load the dataset
Step 3 – Create hyperparameter grid
Step 4 – Use a logistic regression classifier for modelling
Step 5 – Instantiate GridSearchCV object
Step 6 – Fit GridSearchCV object to feature matrix and target variable
Step 7 – Display the tuned parameters and the best score
PROGRAM

# Necessary imports
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
import numpy as np
from sklearn.datasets import make_classification

X, y = make_classification(
    n_samples=1000, n_features=20, n_informative=10, n_classes=2, random_state=42)

# Creating the hyperparameter grid
c_space = np.logspace(-5, 8, 15)
param_grid = {'C': c_space}

# Instantiating logistic regression classifier
logreg = LogisticRegression()

# Instantiating the GridSearchCV object
logreg_cv = GridSearchCV(logreg, param_grid, cv=5)

# Fit the GridSearchCV object to the feature matrix X and target variable y
logreg_cv.fit(X, y)

# Print the tuned parameters and score
print("Tuned Logistic Regression Parameters: {}".format(logreg_cv.best_params_))
print("Best score is {}".format(logreg_cv.best_score_))

Output:
Tuned Logistic Regression Parameters: {'C': 0.006105402296585327}
Best score is 0.853
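GridSearchCV above tunes a scikit-learn classifier; the same idea can be applied to a Keras network. A minimal sketch, assuming the keras_tuner package is installed separately (pip install keras-tuner) and reusing the synthetic X, y from above, that searches over the number of hidden units and the learning rate:
import keras_tuner as kt
import tensorflow as tf
from tensorflow.keras import layers

def build_model(hp):
    # Hyperparameters to search: hidden units and learning rate
    units = hp.Int('units', min_value=32, max_value=128, step=32)
    lr = hp.Choice('learning_rate', [1e-2, 1e-3, 1e-4])
    model = tf.keras.Sequential([
        layers.Dense(units, activation='relu', input_shape=(20,)),
        layers.Dense(1, activation='sigmoid')
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss='binary_crossentropy', metrics=['accuracy'])
    return model

tuner = kt.RandomSearch(build_model, objective='val_accuracy',
                        max_trials=5, overwrite=True, directory='tuning_demo')
tuner.search(X, y, epochs=10, validation_split=0.2, verbose=0)
print(tuner.get_best_hyperparameters(1)[0].values)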
RESULT
The logistic regression hyperparameter C is fine-tuned using GridSearchCV and the best cross-validation score is identified.

EX.NO. 7: IMPLEMENT TRANSFER LEARNING CONCEPT IN IMAGE CLASSIFICATION
AIM
To implement image classification using the transfer learning concept.
ALGORITHM
Step 1 – Import necessary libraries
Step 2 – Load and preprocess the dataset
Step 3 – Use pretrained model such as MobileNetV2
Step 4 – Compile and train the model
Step 5 – Evaluate the model
PROGRAM
# import necessary libraries
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.applications import MobileNetV2
import numpy as np
import matplotlib.pyplot as plt
# Load and preprocess the dataset
(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()
# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0
# Visualize the data
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()
# Use pretrained model
base_model = MobileNetV2(input_shape=(32, 32, 3), include_top=False,
                         weights='imagenet')
base_model.trainable = False  # Freeze the base model
# Add our own classifier on top
model = models.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation='softmax')
])
# Compile and train the model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))
# Evaluate the model
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(f"Test accuracy: {test_acc}")
# Visualize the training process by plotting performance metrics
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0, 1])
plt.legend(loc='lower right')
plt.show()
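MobileNetV2's ImageNet weights were learned on much larger inputs than 32x32, so accuracy on raw CIFAR-10 images is often limited. A common variation, shown here as a hedged sketch assuming the arrays and imports above (layers.Resizing requires a reasonably recent TensorFlow), is to upscale the images inside the model and apply the network's own preprocessing before the frozen base:
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

inputs = layers.Input(shape=(32, 32, 3))
x = layers.Resizing(96, 96)(inputs)            # upscale 32x32 images to 96x96
x = preprocess_input(x * 255.0)                # preprocess_input expects 0-255 pixel values
base = MobileNetV2(input_shape=(96, 96, 3), include_top=False, weights='imagenet')
base.trainable = False                         # keep the pretrained weights frozen
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation='softmax')(x)
resized_model = models.Model(inputs, outputs)
resized_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])
# resized_model can then be trained exactly like the model above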
RESULT
Thus, images are classified using transfer learning and the performance metrics are reviewed.

EX. NO. 8: USING A PRETRAINED MODEL IN KERAS FOR TRANSFER LEARNING
AIM
To use a pretrained model in Keras for transfer learning.
ALGORITHM
Step 1 – Load and preprocess the data.
Step 2 – Load the pretrained model and add custom layers.
Step 3 – Freeze the pretrained layers.
Step 4 – Train the custom layers.
Step 5 – Unfreeze some layers for fine-tuning.
Step 6 – Retrain the model.
Step 7 – Evaluate and visualize the model.
PROGRAM
import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.utils import to_categorical
import matplotlib.pyplot as plt
# Load CIFAR-10 dataset
(train_data, train_labels), (val_data, val_labels) = cifar10.load_data()
# Normalize pixel values to [0, 1]
train_data = train_data / 255.0
val_data = val_data / 255.0
# One-hot encode labels
train_labels = to_categorical(train_labels, num_classes=10)
val_labels = to_categorical(val_labels, num_classes=10)
# Build the model using VGG16 as base
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)
pretrain_model = Model(inputs=base_model.input, outputs=predictions)
# Freeze the layers of the pretrained model
for layer in base_model.layers:
    layer.trainable = False
# Compile the pretraining model
pretrain_model.compile(optimizer='adam', loss='categorical_crossentropy',
                       metrics=['accuracy'])
# Fit the pretraining model
history_pretrain = pretrain_model.fit(train_data, train_labels, epochs=10,
                                      validation_data=(val_data, val_labels))
# Fine-tuning: unfreeze the last few layers
for layer in pretrain_model.layers[-4:]:
    layer.trainable = True
# Compile the model with a lower learning rate
pretrain_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
                       loss='categorical_crossentropy', metrics=['accuracy'])
# Fit the fine-tuning model
history_finetune = pretrain_model.fit(train_data, train_labels, epochs=5,
                                      validation_data=(val_data, val_labels))
# Evaluate the model on validation data
val_loss, val_accuracy = pretrain_model.evaluate(val_data, val_labels, verbose=2)
print(f'Validation Accuracy: {val_accuracy:.4f}')
print(f'Validation Loss: {val_loss:.4f}')
# Plot training & validation accuracy values
plt.figure(figsize=(14, 5))
plt.subplot(1, 2, 1)
plt.plot(history_pretrain.history['accuracy'] + history_finetune.history['accuracy'])
plt.plot(history_pretrain.history['val_accuracy'] + history_finetune.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
# Plot training & validation loss values
plt.subplot(1, 2, 2)
plt.plot(history_pretrain.history['loss'] + history_finetune.history['loss'])
plt.plot(history_pretrain.history['val_loss'] + history_finetune.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
Output
Epoch 1/5
1563/1563 [==============================] - 785s 501ms/step - loss: 1.3160 -
accuracy: 0.5367 - val_loss: 1.2178 - val_accuracy: 0.5719
Epoch 2/5
1563/1563 [==============================] - 772s 494ms/step - loss: 1.1445 -
accuracy: 0.5968 - val_loss: 1.1974 - val_accuracy: 0.5718
Epoch 3/5
1563/1563 [==============================] - 790s 505ms/step - loss: 1.0635 -
accuracy: 0.6238 - val_loss: 1.1318 - val_accuracy: 0.6046
Epoch 4/5
1563/1563 [==============================] - 787s 503ms/step - loss: 0.9995 -
accuracy: 0.6483 - val_loss: 1.1179 - val_accuracy: 0.6110
Epoch 5/5
1563/1563 [==============================] - 789s 505ms/step - loss: 0.9467 -
accuracy: 0.6662 - val_loss: 1.1093 - val_accuracy: 0.6123
Epoch 1/5
1563/1563 [==============================] - 792s 506ms/step - loss: 0.8030 -
accuracy: 0.7197 - val_loss: 1.0506 - val_accuracy: 0.6385
Epoch 2/5
1563/1563 [==============================] - 769s 492ms/step - loss: 0.7705 -
accuracy: 0.7323 - val_loss: 1.0459 - val_accuracy: 0.6373
Epoch 3/5
1563/1563 [==============================] - 790s 505ms/step - loss: 0.7548 -
accuracy: 0.7380 - val_loss: 1.0466 - val_accuracy: 0.6370
Epoch 4/5
1563/1563 [==============================] - 788s 504ms/step - loss: 0.7410 -
accuracy: 0.7438 - val_loss: 1.0460 - val_accuracy: 0.6413
Epoch 5/5
1563/1563 [==============================] - 790s 506ms/step - loss: 0.7280 -
accuracy: 0.7494 - val_loss: 1.0453 - val_accuracy: 0.6412
313/313 - 122s - loss: 1.0453 - accuracy: 0.6412 - 122s/epoch - 390ms/step
Validation Accuracy: 0.6412
Validation Loss: 1.0453
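To verify that the freeze and unfreeze steps behaved as intended, a small check such as the following sketch (assuming pretrain_model from above) can be run after each compile step; it simply reports which layers and how many weight tensors are currently trainable:
# List trainable layers and count trainable vs. frozen weight tensors
trainable_layers = [l.name for l in pretrain_model.layers if l.trainable]
print("Trainable layers:", trainable_layers)
print("Trainable weight tensors:", len(pretrain_model.trainable_weights))
print("Frozen weight tensors:", len(pretrain_model.non_trainable_weights))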
RESULT
The pretrained model is used for transfer learning and its accuracy is measured.

EX. NO. 9: PERFORM SENTIMENT ANALYSIS USING RNN


AIM
To perform sentiment analysis using a simple RNN (vanilla RNN).
ALGORITHM
Step 1 – Import necessary libraries and dataset.
Step 2 – Get the index values of the words and print the reviews.
Step 3 – Check the range of the reviews.
Step 4 – Create a simple RNN model.
Step 5 – Compile the model.
Step 6- Train the model.
Step 7 – Print the model score on test data.
PROGRAM
#Importing libraries and dataset
from tensorflow.keras.layers import SimpleRNN, Dense, Embedding
from tensorflow.keras.datasets import imdb
from tensorflow.keras.models import Sequential
import numpy as np
# Getting reviews with words that come under 5000
# most occurring words in the entire
# corpus of textual review data
vocab_size = 5000
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=vocab_size)
print(x_train[0])
Output:
[1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66,3941, 4, 173, 36, 256, 5, 25, 100,
43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172,
112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147,
2025, 19, 14, 22,
4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12,
16, 626, 18,
2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 2, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619,
5, 25, 124,
..]
# Getting all the words from word_index dictionary
word_idx = imdb.get_word_index()
# word_index originally maps words to index numbers;
# invert it so that indices become keys and words become values
word_idx = {i: word for word, i in word_idx.items()}
# again printing the review
print([word_idx[i] for i in x_train[0]])
Output:
['the', 'as', 'you', 'with', 'out', 'themselves', 'powerful', 'lets', 'loves', 'their', 'becomes', 'reaching',
'had', 'journalist', 'of', 'lot', 'from', 'anyone', 'to', 'have', 'after', 'out', 'atmosphere', 'never', 'more',
'room', 'and', 'it', 'so', 'heart', 'shows', 'to', 'years', 'of', 'every', 'never', 'going', 'and', 'help',
'moments', 'or', 'of', 'every', 'chest', 'visual', 'movie', 'except', 'her', 'was', 'several', 'of', 'enough',
'more', 'with', 'is', 'now', 'current', 'film', 'as', 'you', 'of', 'mine', 'potentially', 'unfortunately', 'of',
'you', 'than', 'him', 'that', 'with', 'out', 'themselves', 'her', 'get', 'for', 'was', 'camp', 'of', 'you',
'movie', 'sometimes', 'movie', 'that', 'with', 'scary', 'but', 'and', 'to', 'story', 'wonderful', 'that', 'in',
'seeing', 'in', 'character', 'to', 'of', '70s', 'and', 'with', 'heart', 'had', 'shadows', 'they', 'of', 'here',
'that', 'with', 'her', 'serious', 'to', 'have', 'does', 'when', 'from', 'why', 'what', 'have', 'critics', 'they',
'is', 'you', 'that', "isn't", 'one', 'will', 'very', 'to', 'as', 'itself', 'with', 'other', 'and', 'in', 'of', 'seen',
'over', 'and', 'for', 'anyone', 'of', 'and', 'br', "show's", 'to', 'whether', 'from', 'than', 'out',
'themselves', 'history', 'he', 'name', 'half', 'some', 'br', 'of', 'and', 'odd', 'was', 'two', 'most', 'of',
'mean', 'for', '1', 'any', 'an', 'boat', 'she', 'he', 'should', 'is', 'thought', 'and', 'but', 'of', 'script', 'you',
'not', 'while', 'history', 'he', 'heart', 'to', 'real', 'at', 'and', 'but', 'when', 'from', 'one', 'bit', 'then',
'have', 'two', 'of', 'script', 'their', 'with', 'her', 'nobody', 'most', 'that', 'with', "wasn't", 'to', 'with',
'armed', 'acting', 'watch', 'an', 'for', 'with', 'and', 'film', 'want', 'an']
# Get the minimum and the maximum length of reviews
print("Max length of a review:: ", len(max((x_train+x_test), key=len)))
print("Min length of a review:: ", len(min((x_train+x_test), key=len)))
Output:
Max length of a review:: 2697
Min length of a review:: 70
from tensorflow.keras.preprocessing import sequence
# Keeping a fixed length of all reviews to max 400 words
max_words = 400
x_train = sequence.pad_sequences(x_train, maxlen=max_words)
x_test = sequence.pad_sequences(x_test, maxlen=max_words)

x_valid, y_valid = x_train[:64], y_train[:64]
x_train_, y_train_ = x_train[64:], y_train[64:]
# fixing every word's embedding size to be 32
embd_len = 32
# Creating a RNN model
RNN_model = Sequential(name="Simple_RNN")
RNN_model.add(Embedding(vocab_size,
embd_len,
input_length=max_words))
# In case of a stacked(more than one layer of RNN)
# use return_sequences=True
RNN_model.add(SimpleRNN(128,
activation='tanh',
return_sequences=False))
RNN_model.add(Dense(1, activation='sigmoid'))
# printing model summary
print(RNN_model.summary())
# Compiling model
RNN_model.compile(
loss="binary_crossentropy",
optimizer='adam',
metrics=['accuracy']
)
# Training the model
history = RNN_model.fit(x_train_, y_train_,
batch_size=64,
epochs=5,
verbose=1,
validation_data=(x_valid, y_valid))

# Printing model score on test data
print()
print("Simple_RNN Score---> ", RNN_model.evaluate(x_test, y_test, verbose=0))
Output:
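As an optional extension, the trained model can score a custom review; a minimal sketch assuming RNN_model, vocab_size and max_words from above (the sample sentence is illustrative). The IMDB encoding offsets word indices by 3, and any word outside the 5000-word vocabulary is mapped to the out-of-vocabulary index 2:
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence

word_index = imdb.get_word_index()

def encode_review(text):
    # 0 = padding, 1 = start-of-sequence, 2 = out-of-vocabulary, real words start at index 3
    tokens = [1]
    for w in text.lower().split():
        idx = word_index.get(w, -1) + 3
        tokens.append(idx if 2 < idx < vocab_size else 2)
    return sequence.pad_sequences([tokens], maxlen=max_words)

sample = encode_review("the movie was wonderful and the acting was great")
print("Positive probability:", float(RNN_model.predict(sample)[0][0]))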
RESULT
The sentiment analysis is performed using a simple RNN (vanilla RNN).

EX.NO. 10: IMPLEMENT AN LSTM BASED AUTOENCODER IN TENSORFLOW/KERAS
AIM
To implement an LSTM based autoencoder in tensorflow / keras.
ALGORITHM
Step 1 – Import the required libraries
Step 2 – Create a sample of simple sequential data for input
Step 3 – Reshape input into the preferred LSTM data format
Step 4 – Add LSTM encoder – decoder
Step 5 – Convert 1D vector to 2D
Step 6 – Compile the model
Step 7 – Train the model
Step 8 – Reconstruct the input
PROGRAM
from numpy import array
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import RepeatVector
from keras.layers import TimeDistributed
# define input sequence
sequence = array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
# reshape input into [samples, timesteps, features]
n_in = len(sequence)  # i.e. 9
sequence = sequence.reshape((1, n_in, 1))
model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(n_in, 1)))
# RepeatVector repeats the incoming input a specific number of times
model.add(RepeatVector(n_in))
model.add(LSTM(100, activation='relu', return_sequences=True))
# TimeDistributed applies the Dense layer to every temporal slice of the input
model.add(TimeDistributed(Dense(1)))
model.compile(optimizer='adam', loss='mse')
# fit model
model.fit(sequence, sequence, epochs=300, verbose=1)
# Reconstruct the input sequence
p = model.predict(sequence, verbose=0)
print(p[0, :, 0])
Output
[0.1100185 0.20737442 0.3037837 0.40000474 0.4967959 0.59493166
0.69522375 0.7985466 0.9058684 ]
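To use the network as an actual encoder, the 100-dimensional bottleneck (the output of the first LSTM layer, before RepeatVector) can be extracted into a standalone model; a minimal sketch assuming the trained model above (this pattern works with tf.keras 2.x; newer Keras versions may require rebuilding the encoder explicitly):
from keras.models import Model

# Standalone encoder: maps the input sequence to its 100-dimensional code
encoder = Model(inputs=model.inputs, outputs=model.layers[0].output)
code = encoder.predict(sequence, verbose=0)
print(code.shape)  # expected: (1, 100)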
RESULT
An LSTM based autoencoder is implemented using tensorflow/keras.

EX.NO. 11: IMAGE GENERATION USING GAN


AIM
To generate images using a Generative Adversarial Network (GAN).
ALGORITHM
Step 1 – Import the required libraries.
Step 2 – Load the dataset and preprocess it.
Step 3 – Build the generator model using a CNN.
Step 4 – Build the discriminator model using a CNN.
Step 5 – Compile the models.
Step 6 – Train the model and visualize the progress.
Step 7 – Save the generated images and display them.
PROGRAM
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers
import matplotlib.pyplot as plt
# Step 1: Dataset Preparation
# Assuming you have a dataset of images (e.g., MNIST), load and preprocess them
(x_train, _), (_, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape((-1, 28, 28, 1)).astype('float32') / 255.0
def build_generator_cnn():
    model = models.Sequential([
        # Start with a fully connected layer to interpret the seed
        layers.Dense(7*7*128, input_dim=100, activation='relu'),
        layers.Reshape((7, 7, 128)),  # Reshape into an image format
        # Upsample to 14x14
        layers.Conv2DTranspose(128, kernel_size=4, strides=2, padding='same',
                               activation='relu'),
        layers.BatchNormalization(),
        # Upsample to 28x28
        layers.Conv2DTranspose(128, kernel_size=4, strides=2, padding='same',
                               activation='relu'),
        layers.BatchNormalization(),
        # Output layer with the shape of the target image, 1 channel for grayscale
        layers.Conv2D(1, kernel_size=7, activation='sigmoid', padding='same')
    ])
    return model

def build_discriminator_cnn():
    model = models.Sequential([
        # Input layer with the shape of the target image
        layers.Conv2D(64, kernel_size=3, strides=2, input_shape=(28, 28, 1),
                      padding='same', activation='relu'),
        # Downsample to 14x14
        layers.Conv2D(128, kernel_size=3, strides=2, padding='same', activation='relu'),
        layers.BatchNormalization(),
        # Further downsampling and flattening to feed into a dense output layer
        layers.Flatten(),
        layers.Dense(1, activation='sigmoid')
    ])
    return model

# Instantiate the CNN-based Generator and Discriminator
generator_cnn = build_generator_cnn()
discriminator_cnn = build_discriminator_cnn()

# Compile the Discriminator
discriminator_cnn.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0002),
                          loss='binary_crossentropy', metrics=['accuracy'])

# Set the Discriminator's weights to non-trainable
# (important when we train the combined GAN model)
discriminator_cnn.trainable = False

# Combined GAN model with CNN
gan_input = layers.Input(shape=(100,))
gan_output = discriminator_cnn(generator_cnn(gan_input))
gan_cnn = models.Model(gan_input, gan_output)
gan_cnn.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0002),
                loss='binary_crossentropy')

# Training loop settings
epochs = 10000
batch_size = 64

for epoch in range(epochs):

    ############################
    # 1. Train the Discriminator
    ############################

    # Generate a batch of noise and fake images
    noise = np.random.normal(0, 1, (batch_size, 100))
    generated_images = generator_cnn.predict(noise)

    # Get a random batch of real images
    idx = np.random.randint(0, x_train.shape[0], batch_size)
    real_images = x_train[idx]

    # Labels for generated and real data
    fake_labels = np.zeros((batch_size, 1))
    real_labels = np.ones((batch_size, 1))

    # Train the Discriminator (real classified as ones and generated as zeros)
    d_loss_real = discriminator_cnn.train_on_batch(real_images, real_labels)
    d_loss_fake = discriminator_cnn.train_on_batch(generated_images, fake_labels)

    #################################
    # 2. Train the Generator (via GAN)
    #################################

    # Train the generator (we want the Discriminator to mistake generated images as real)
    noise = np.random.normal(0, 1, (batch_size, 100))
    valid_labels = np.ones((batch_size, 1))
    g_loss = gan_cnn.train_on_batch(noise, valid_labels)

    # Print the progress
    if epoch % 100 == 0:
        print(f"Epoch {epoch}: D Loss Real: {d_loss_real[0]}, "
              f"D Loss Fake: {d_loss_fake[0]}, G Loss: {g_loss}")

    # Optionally, save generated images and display
    if epoch % 1000 == 0:
        generated_image = generator_cnn.predict(np.random.normal(0, 1, (1, 100)))
        plt.imshow(generated_image[0, :, :, 0], cmap='gray')
        plt.axis('off')
        plt.show()
        plt.close()
Output:
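After (or during) training, the generator alone can be used to produce a grid of sample digits; a minimal sketch assuming generator_cnn and the imports from above:
# Generate a 4x4 grid of samples from random noise vectors
noise = np.random.normal(0, 1, (16, 100))
samples = generator_cnn.predict(noise, verbose=0)
fig, axes = plt.subplots(4, 4, figsize=(4, 4))
for img, ax in zip(samples, axes.flatten()):
    ax.imshow(img[:, :, 0], cmap='gray')
    ax.axis('off')
plt.tight_layout()
plt.show()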

RESULT
Image generation is performed using a Generative Adversarial Network (GAN).
