
Artificial Intelligence A.Y. 2024-2025

Name: Krishna Chitlangia
Enrollment No: 22012011016
Batch: 6CE-D-2

Practical-5
Aim: Write a Python program to create a simple 3-layer neural network for the implementation of a binary function.
• 3 layers: input layer, 1 hidden layer, 1 output layer
• Create a neural_network class
• For the input of the Boolean function, use two inputs and 3 neurons in the hidden layer
• Boolean function: XOR (see the note after this list)
• Use the Sigmoid function as the activation function in all neurons
• Choose an appropriate error function as the loss function and compare
• Initialize weights and biases randomly in the range (-1, 1)
• Use the backpropagation algorithm to train the neural network
• Don't use any Python package except numpy
• Print all parameters (weights and biases) after training
• Print the output of the neural network after training for all possible inputs
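Note on the target function: XOR maps (0, 0) -> 0, (0, 1) -> 1, (1, 0) -> 1, and (1, 1) -> 0. XOR is not linearly separable, so a single-layer perceptron cannot represent it; the hidden layer is what makes this mapping learnable, which is why the assignment calls for a 3-layer network.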

Code:

import numpy as np

# Define the neural network class
class NeuralNetwork:
    def __init__(self):
        # Initialize the size of each layer
        self.input_size = 2
        self.hidden_size = 3
        self.output_size = 1

        # Randomly initialize weights and biases in the range (-1, 1)
        self.W_in_hid = np.random.uniform(-1, 1, (self.input_size, self.hidden_size))
        self.b_hid = np.random.uniform(-1, 1, (1, self.hidden_size))
        self.W_hid_out = np.random.uniform(-1, 1, (self.hidden_size, self.output_size))
        self.b_out = np.random.uniform(-1, 1, (1, self.output_size))

    def sigmoid(self, x):
        # Sigmoid activation function
        return 1 / (1 + np.exp(-x))
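    # Note on the shortcut used below: if s = sigmoid(z), then
    # d(sigmoid)/dz = s * (1 - s), so the derivative can be evaluated
    # directly from the already-activated output.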

    def sigmoid_derivative(self, x):
        # Derivative of the sigmoid function, given the sigmoid output x
        return x * (1 - x)

    def forward(self, X):
        # Forward pass
        self.hin = np.dot(X, self.W_in_hid) + self.b_hid
        self.hout = self.sigmoid(self.hin)
        self.oin = np.dot(self.hout, self.W_hid_out) + self.b_out
        self.oout = self.sigmoid(self.oin)
        return self.oout
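    # Array shapes through the forward pass (n = number of samples):
    # X: (n, 2) -> hin/hout: (n, 3) -> oin/oout: (n, 1)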

    def loss_function(self, y_true, y_pred):
        # Mean Squared Error loss function
        return np.mean((y_true - y_pred) ** 2)
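    # Note: the exact MSE gradient is 2 * (y_pred - y_true) / n; the
    # constant factor of 2 is dropped below, which is harmless because
    # it is absorbed into the learning rate.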

    def loss_derivative(self, y_true, y_pred):
        # Derivative of the Mean Squared Error loss function
        return (y_pred - y_true) / y_true.shape[0]

    def train(self, X, y, learning_rate, epochs):
        for epoch in range(epochs):
            # Forward pass
            y_pred = self.forward(X)

            # Calculate loss
            loss = self.loss_function(y, y_pred)
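            # Gradients below follow the chain rule:
            #   output delta = dLoss/dy_pred * sigmoid'(y_pred)
            #   hidden delta = (output delta propagated back through
            #                   W_hid_out) * sigmoid'(hout)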

            # Backpropagation
            d_loss_out = self.loss_derivative(y, y_pred) * self.sigmoid_derivative(y_pred)
            d_loss_hid = np.dot(d_loss_out, self.W_hid_out.T) * self.sigmoid_derivative(self.hout)

            # Update weights and biases using the calculated gradients
            self.W_hid_out -= learning_rate * np.dot(self.hout.T, d_loss_out)
            self.b_out -= learning_rate * np.sum(d_loss_out, axis=0, keepdims=True)
            self.W_in_hid -= learning_rate * np.dot(X.T, d_loss_hid)
            self.b_hid -= learning_rate * np.sum(d_loss_hid, axis=0, keepdims=True)

            # Print loss at intervals
            if (epoch + 1) % 1000 == 0 or epoch == 0:
                print(f"Epoch {epoch + 1}, Loss: {loss:.6f}")

    def predict(self, X):
        # Predict the output for given input
        return self.forward(X)
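# Optional (not required by the assignment): because weights and biases are
# initialized randomly, each run converges differently; calling, e.g.,
# np.random.seed(0) before constructing the network makes runs
# reproducible. The seed value 0 is an arbitrary choice.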

# Define the input data and corresponding XOR output
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# Create an instance of the neural network
nn = NeuralNetwork()

# Set learning rate and number of epochs
learning_rate = 0.9
epochs = 10000

# Train the neural network
nn.train(X, y, learning_rate, epochs)

print("\nFinal Weights and Biases:")
print("Weights (input to hidden):\n", nn.W_in_hid)
print("Biases (hidden):\n", nn.b_hid)
print("Weights (hidden to output):\n", nn.W_hid_out)
print("Biases (output):\n", nn.b_out)

# Test the trained neural network
print("\nPredicted Outputs for XOR inputs:")
for i in range(len(X)):
    prediction = nn.predict(X[i].reshape(1, -1))[0][0]
    print(f"Input: {X[i]}, Predicted Output: {prediction:.4f}")

Output:

