CS878 - Lab 1
Steps:
Import required Libraries:
Import the necessary libraries: NumPy for numerical computations, Matplotlib for plotting the results, and Keras for building and training the model.
Load the dataset:
Load the MNIST dataset, which contains images of handwritten digits along with their labels.
Preprocess Dataset:
Flatten each 28×28 image into a 784-dimensional vector, scale the pixel values to the range [0, 1], and convert the labels to categorical form using one-hot encoding (a technique that represents each label as a binary vector of all zeroes except for a one at the index of its class), as illustrated in the snippet below.
Define the Model:
Define a Sequential model (used to construct a simple neural network layer by layer) using the Keras library, with 128 units in the first layer and 10 units in the second layer, matching the number of classes in the dataset.
ReLU is used as the activation function of the first layer (a mathematical operation that introduces non-linearity so the model can learn complex patterns) and softmax as the activation function of the output layer (which turns the raw outputs into class probabilities); both are illustrated in the snippet below.
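For intuition, the snippet below (not part of the lab template) shows how ReLU and softmax can be written directly in NumPy:
# Illustrative example: ReLU and softmax implemented in NumPy
import numpy as np

def relu(x):
    # ReLU keeps positive values and replaces negatives with zero
    return np.maximum(0, x)

def softmax(x):
    # Softmax converts raw scores into probabilities that sum to 1
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, -1.0, 0.5])
print(relu(scores))     # [2.  0.  0.5]
print(softmax(scores))  # probabilities over the three scores, summing to 1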
Compile the model:
During compilation, specify the loss function (a measure of the model's performance that quantifies how well it is doing by comparing its predictions with the targets), the metrics to report during training, and the Adam optimizer (optimizers adjust the model's parameters so as to minimize the loss and reach the set of weights that gives the best performance). A worked example of the categorical cross-entropy loss is shown below.
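For reference, the snippet below (not part of the lab template) computes the categorical cross-entropy loss for a single sample by hand in NumPy:
# Illustrative example: categorical cross-entropy for one sample
import numpy as np
y_true = np.array([0, 0, 1])        # one-hot target (true class is index 2)
y_pred = np.array([0.1, 0.2, 0.7])  # predicted class probabilities
loss = -np.sum(y_true * np.log(y_pred))
print(loss)  # about 0.357; the loss is lower when the true class gets higher probability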
Train the model:
Train the model on the training dataset for 10 epochs (an epoch is one complete pass through the entire training data) and use validation data to monitor the model's performance.
Plot the results:
Use Matplotlib to plot the results (the training and validation accuracy across epochs).
Evaluate the model:
Evaluate the trained model on test data to check accuracy and loss.
Code Template:
# Import necessary libraries
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
# Load MNIST dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Preprocess the data
X_train = X_train.reshape((X_train.shape[0], 28 * 28)).astype('float32') / 255
X_test = X_test.reshape((X_test.shape[0], 28 * 28)).astype('float32') / 255
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# Define the model
model = Sequential([
    Dense(128, activation='relu', input_shape=(28 * 28,)),
    Dense(10, activation='softmax')
])
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Train the model and store the training history
history = model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))
# Plot the training and validation accuracy
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
# Evaluate the model
test_loss, test_acc = model.evaluate(X_test, y_test)
print('Test accuracy:', test_acc)
TASK: Develop a multi-layer neural network model for classification of the IRIS dataset.
IRIS Dataset
The IRIS dataset is a famous dataset in the field of machine learning and statistics. It is often
used for classification tasks, particularly for practicing and demonstrating various algorithms
and techniques. The dataset consists of 150 samples of iris flowers, each with four features
measured: sepal length, sepal width, petal length, and petal width. These features are used
to classify each iris flower into one of three species: setosa, versicolor, or virginica.
Here's a breakdown of the dataset:
Features:
Sepal length (in centimeters)
Sepal width (in centimeters)
Petal length (in centimeters)
Petal width (in centimeters)
Target Variable:
Species: Setosa, Versicolor, Virginica
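As a starting point, here is a minimal sketch of how the IRIS data could be loaded, preprocessed, and fed to a multi-layer network. It assumes scikit-learn is available for loading the dataset and creating the train/test split; the hidden-layer sizes and number of epochs are placeholder choices, not prescribed values.
# Sketch: load and preprocess the IRIS dataset, then train a small multi-layer network
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

# Load the 150 samples (4 features each) and their class labels (0, 1, 2)
iris = load_iris()
X = iris.data.astype('float32')
y = to_categorical(iris.target, num_classes=3)  # one-hot encode the three species

# Hold out a test split to evaluate the model
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Multi-layer network: two hidden layers (sizes are placeholder choices)
model = Sequential([
    Dense(16, activation='relu', input_shape=(4,)),
    Dense(16, activation='relu'),
    Dense(3, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=50, validation_data=(X_test, y_test))
test_loss, test_acc = model.evaluate(X_test, y_test)
print('Test accuracy:', test_acc)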