AIML lab ex 2

This document provides a step-by-step guide to implementing a simple feedforward neural network with TensorFlow/Keras and training it on the MNIST dataset. It demonstrates overfitting and underfitting by training an over-complex model on a small dataset, then modifies the architecture with L2 regularization and dropout to improve generalization. Finally, it compares the models' performance using accuracy and loss curves.


1. Implement a simple neural network using backpropagation in Python using TensorFlow/Keras.

Question:

Write a Python program to implement a simple feedforward neural network with backpropagation using TensorFlow/Keras. Train it on the MNIST dataset.

Answer:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np

# Load the MNIST dataset (60,000 training and 10,000 test images of handwritten digits)
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Normalize pixel values from [0, 255] to [0, 1]
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define a simple feedforward neural network
model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),    # Flatten each 28x28 image into a 784-element vector
    layers.Dense(128, activation='relu'),    # Hidden layer with 128 units
    layers.Dense(10, activation='softmax')   # Output layer: one probability per digit class
])

# Compile the model (backpropagation is applied automatically during training)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model for 5 epochs
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

# Evaluate on the test set
test_loss, test_acc = model.evaluate(x_test, y_test)
print(f"Test Accuracy: {test_acc}")

2. Demonstrate overfitting and underfitting in a neural network.

Question:

Train a neural network on a small dataset and demonstrate the effects of overfitting and
underfitting. Modify the architecture to reduce overfitting.

Answer:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Load Fashion-MNIST; only a small slice is used for training to induce overfitting
(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()

# Normalize pixel values to [0, 1]
x_train, x_test = x_train / 255.0, x_test / 255.0

# Overfitting model: deliberately over-parameterized for a 5,000-sample training set
overfit_model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(512, activation='relu'),
    layers.Dense(512, activation='relu'),
    layers.Dense(10, activation='softmax')
])

overfit_model.compile(optimizer='adam',
                      loss='sparse_categorical_crossentropy',
                      metrics=['accuracy'])

# Training accuracy climbs while validation accuracy stalls: the signature of overfitting
overfit_model.fit(x_train[:5000], y_train[:5000], epochs=20,
                  validation_data=(x_test, y_test))
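
The question also asks for underfitting, which the code above does not demonstrate. A minimal sketch, assuming the same Fashion-MNIST arrays are already loaded: an under-capacity network (named underfit_model here for illustration; the name is not in the original) plateaus at low accuracy on both the training and validation sets.

# Underfitting model: far too little capacity for the task
# (underfit_model is an illustrative name, not part of the original lab)
underfit_model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(4, activation='relu'),      # Only 4 hidden units for a 10-class problem
    layers.Dense(10, activation='softmax')
])

underfit_model.compile(optimizer='adam',
                       loss='sparse_categorical_crossentropy',
                       metrics=['accuracy'])

# Both training and validation accuracy stay low: the signature of underfitting
underfit_model.fit(x_train[:5000], y_train[:5000], epochs=20,
                   validation_data=(x_test, y_test))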

3. Apply L2 Regularization (Ridge) and Dropout to prevent overfitting.

Question:

Modify the overfitting model to include L2 regularization and dropout to improve generalization.

Answer:

# Improved model with L2 regularization and dropout
regularized_model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(512, activation='relu',
                 kernel_regularizer=keras.regularizers.l2(0.001)),
    layers.Dropout(0.5),   # Randomly drop 50% of activations during training
    layers.Dense(256, activation='relu',
                 kernel_regularizer=keras.regularizers.l2(0.001)),
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax')
])

regularized_model.compile(optimizer='adam',
                          loss='sparse_categorical_crossentropy',
                          metrics=['accuracy'])

regularized_model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
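
As an optional check (not part of the original lab), Keras exposes each layer's regularization penalty through the model's losses property, which confirms the L2 terms are active:

# Optional: inspect the L2 penalty terms added to the training loss
for penalty in regularized_model.losses:
    print(float(penalty))   # Non-zero values confirm the regularizers are applied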

4. Compare model performance with and without regularization using accuracy and loss curves.

Question:

Plot accuracy and loss graphs for both the overfitting and regularized models. Observe the differences.

Answer:

import matplotlib.pyplot as plt

# Retrain both models for 10 more epochs and record their training histories
# (each model continues from the weights learned in the previous sections)
history_overfit = overfit_model.fit(x_train, y_train, epochs=10,
                                    validation_data=(x_test, y_test), verbose=0)
history_regularized = regularized_model.fit(x_train, y_train, epochs=10,
                                            validation_data=(x_test, y_test), verbose=0)

# Plot validation accuracy comparison
plt.plot(history_overfit.history['val_accuracy'], label="Overfit Model")
plt.plot(history_regularized.history['val_accuracy'], label="Regularized Model")
plt.xlabel("Epochs")
plt.ylabel("Validation Accuracy")
plt.legend()
plt.show()

# Plot validation loss comparison
plt.plot(history_overfit.history['val_loss'], label="Overfit Model")
plt.plot(history_regularized.history['val_loss'], label="Regularized Model")
plt.xlabel("Epochs")
plt.ylabel("Validation Loss")
plt.legend()
plt.show()
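
Overfitting is easiest to see as the gap between training and validation curves, which the plots above omit. A small optional addition, reusing the history objects from the code above: a wide train/validation gap indicates overfitting, and the regularized model's curves should sit closer together.

# Optional: plot training vs. validation accuracy to make the overfitting gap visible
plt.plot(history_overfit.history['accuracy'], label="Overfit (train)")
plt.plot(history_overfit.history['val_accuracy'], label="Overfit (validation)")
plt.plot(history_regularized.history['accuracy'], label="Regularized (train)")
plt.plot(history_regularized.history['val_accuracy'], label="Regularized (validation)")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()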
