CS878 - Lab 1


Lab Session 1: Introduction to Neural Networks

Data Set: MNIST Handwritten Digits


Libraries used:
NumPy:
NumPy is a powerful library for numerical computations in Python, providing support for
large, multi-dimensional arrays and matrices.
Matplotlib:
Matplotlib is used for visualization: creating static and animated plots and charts.
Keras:
Keras is the machine learning framework used for defining, training, and deploying neural
networks.

Steps:
Import required Libraries:
Import the necessary libraries: NumPy for mathematical calculations, Matplotlib for plotting
the results, and Keras for building and training the model.
Load the dataset:
Load the MNIST dataset, which contains images of handwritten digits along with their labels.
Preprocess Dataset:
Flatten each 28*28 image into a vector of 784 pixel values, scale the pixel values to the
range [0, 1], and convert the labels to categorical form using one-hot encoding (a technique
that represents each label as a binary vector that is all zeros except for a one at the
position of its class).
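For instance, one-hot encoding the digit label 3 produces a 10-element vector with a single one at index 3. A minimal illustration using Keras's to_categorical (the example labels here are arbitrary):

import numpy as np
from keras.utils import to_categorical

labels = np.array([0, 3, 9])                      # example digit labels
one_hot = to_categorical(labels, num_classes=10)  # each label becomes a length-10 binary vector
print(one_hot)
# [[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]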
Define the Model:
Define a sequential model (which builds a neural network layer by layer) using the Keras
library, with 128 units in the first layer and 10 units in the second layer, matching the
number of classes in the dataset.
ReLU is used as the activation function of the first layer (an activation function is a
mathematical operation that introduces non-linearity so the model can learn complex patterns),
and softmax is used as the activation of the output layer, converting the raw outputs into
class probabilities.
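As a rough sketch of what these two activations compute, the following NumPy illustration (not the Keras implementation itself) applies ReLU and softmax to an example score vector:

import numpy as np

def relu(x):
    # ReLU keeps positive values and clamps negative values to zero.
    return np.maximum(0, x)

def softmax(x):
    # Softmax turns a vector of scores into probabilities that sum to 1.
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([-1.0, 0.5, 2.0])
print(relu(scores))     # [0.  0.5 2. ]
print(softmax(scores))  # roughly [0.04 0.18 0.79]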
Compile the model:
During compilation, specify the loss function (a measure of the model's performance that
quantifies how well the predictions match the targets), the metrics to track during training,
and the Adam optimizer (an optimizer adjusts the model's parameters to minimize the loss and
reach the best-performing configuration).
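For intuition, categorical cross-entropy for a single sample is the negative log of the probability the model assigns to the true class. A minimal NumPy sketch (Keras computes the same quantity and averages it over a batch):

import numpy as np

def categorical_crossentropy(y_true, y_pred):
    # y_true is a one-hot vector, y_pred a vector of predicted probabilities;
    # the loss is the negative log-probability assigned to the true class.
    eps = 1e-7  # avoid log(0)
    return -np.sum(y_true * np.log(y_pred + eps))

y_true = np.array([0.0, 1.0, 0.0])
print(categorical_crossentropy(y_true, np.array([0.05, 0.90, 0.05])))  # about 0.105: confident, correct prediction
print(categorical_crossentropy(y_true, np.array([0.70, 0.20, 0.10])))  # about 1.609: poor prediction, higher loss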
Train the model:
Train the model on the training dataset for 10 epochs (an epoch is one complete pass through
the entire training data) and use validation data to monitor the model's performance.
Plot the results:
Use Matplotlib to plot the results (performance measures).
Evaluate the model:
Evaluate the trained model on test data to check accuracy and loss.

Code Template:
# Import necessary libraries
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
# Load MNIST dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Preprocess the data
X_train = X_train.reshape((X_train.shape[0], 28 * 28)).astype('float32') / 255
X_test = X_test.reshape((X_test.shape[0], 28 * 28)).astype('float32') / 255
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# Define the model
model = Sequential([
    Dense(128, activation='relu', input_shape=(28 * 28,)),
    Dense(10, activation='softmax')
])
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Train the model and store the training history
history = model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))
# Plot the training and validation accuracy
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
# Evaluate the model
test_loss, test_acc = model.evaluate(X_test, y_test)
print('Test accuracy:', test_acc)
TASK: Develop a multi-layer neural network model for classification of the IRIS dataset.

IRIS Dataset
The IRIS dataset is a famous dataset in the field of machine learning and statistics. It is often
used for classification tasks, particularly for practicing and demonstrating various algorithms
and techniques. The dataset consists of 150 samples of iris flowers, each with four features
measured: sepal length, sepal width, petal length, and petal width. These features are used
to classify each iris flower into one of three species: setosa, versicolor, or virginica.
Here's a breakdown of the dataset:
Features:
Sepal length (in centimeters)
Sepal width (in centimeters)
Petal length (in centimeters)
Petal width (in centimeters)
Target Variable:
Species: Setosa, Versicolor, Virginica
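This breakdown can also be inspected programmatically; a quick sketch using scikit-learn's load_iris (the same loader used in the template below):

from sklearn.datasets import load_iris

iris = load_iris()
print(iris.feature_names)  # ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
print(iris.target_names)   # ['setosa' 'versicolor' 'virginica']
print(iris.data.shape)     # (150, 4): 150 samples, 4 features each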

# Import necessary libraries
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_iris
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import to_categorical
# Load the Iris dataset
iris = load_iris()
X = iris.data
# One-hot encode the target labels
y = to_categorical(iris.target)
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Standardize the feature values
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Build a simple neural network model


# Compile the model
# Train the model
# Evaluate the model on the test set
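
One possible way to fill in the remaining steps, mirroring the MNIST template above, is sketched below. This is a minimal sketch rather than the required solution: the hidden-layer sizes (16 units) and the number of epochs (50) are illustrative choices, not fixed requirements.

# Build a simple multi-layer neural network model
model = Sequential([
    Dense(16, activation='relu', input_shape=(4,)),  # 4 input features
    Dense(16, activation='relu'),
    Dense(3, activation='softmax')                   # 3 iris species
])
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Train the model
history = model.fit(X_train, y_train, epochs=50, validation_data=(X_test, y_test))
# Evaluate the model on the test set
test_loss, test_acc = model.evaluate(X_test, y_test)
print('Test accuracy:', test_acc)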
