
Forward & Backward Propagation

1. Explain the concept of forward propagation in a neural network.

Forward propagation is the process of passing input data through the layers of a neural network
to compute the output (predictions). Each layer applies a linear transformation (weighted sum)
to the input, followed by an activation function. The process starts from the input layer and
propagates through the hidden layers to the output layer, where the final prediction is made.
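
A minimal sketch of this layer-by-layer computation for an arbitrary number of layers (the names forward, weights, biases, and activation are illustrative, not from the original text):

import numpy as np

def forward(X, weights, biases, activation):
    # Propagate the input through each layer: A = activation(W @ A_prev + b)
    A = X
    for W, b in zip(weights, biases):
        A = activation(np.dot(W, A) + b)
    return A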

2. What is the purpose of the activation function in forward propagation?

The activation function introduces non-linearity into the network, allowing it to model complex
relationships in the data. Without activation functions, the entire network would behave like a
single linear transformation, regardless of the number of layers, limiting its expressive power.
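
A small NumPy check of this point (an illustrative sketch, not part of the original answer): two stacked linear layers with no activation in between reduce to a single linear transformation, because matrix multiplication is associative.

import numpy as np

np.random.seed(0)
X = np.random.randn(3, 1)
W1, W2 = np.random.randn(4, 3), np.random.randn(2, 4)

# Two "layers" with no activation function in between...
two_layers = np.dot(W2, np.dot(W1, X))
# ...are exactly one linear transformation with the combined matrix W2 @ W1
one_layer = np.dot(np.dot(W2, W1), X)

print(np.allclose(two_layers, one_layer))  # True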

3. Describe the steps involved in the backward propagation (backpropagation) algorithm.

Backpropagation involves the following steps (a NumPy sketch follows the list):

• Compute Loss: Calculate the difference between the predicted output and the actual
  target value using a loss function.
• Propagate Error Backwards: Using the chain rule, calculate the gradients of the loss with
  respect to each weight and bias in the network. This involves:
  o Calculating the gradient of the loss with respect to the output (output layer error).
  o Backpropagating this error through each layer to compute gradients for weights
    and biases.
• Update Weights and Biases: Use the calculated gradients to update the weights and
  biases, typically using an optimization algorithm like gradient descent.
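
A minimal NumPy sketch of these three steps for the one-hidden-layer network defined in question 5 (ReLU hidden layer, linear output). The squared-error loss, the target Y, the learning rate lr, and the function name backward_propagation are illustrative assumptions, not from the original text:

import numpy as np

def backward_propagation(X, Y, W1, b1, W2, b2, A1, A2, lr=0.1):
    # Step 1 - Compute Loss gradient: squared-error loss L = 0.5 * (A2 - Y)**2
    dZ2 = A2 - Y                 # dL/dZ2 (output layer is linear, so dA2/dZ2 = 1)

    # Step 2 - Propagate Error Backwards using the chain rule
    dW2 = np.dot(dZ2, A1.T)      # dL/dW2
    db2 = dZ2                    # dL/db2
    dA1 = np.dot(W2.T, dZ2)      # dL/dA1
    dZ1 = dA1 * (A1 > 0)         # dL/dZ1, ReLU derivative is 1 where A1 > 0
    dW1 = np.dot(dZ1, X.T)       # dL/dW1
    db1 = dZ1                    # dL/db1

    # Step 3 - Update Weights and Biases with one gradient-descent step
    W1 = W1 - lr * dW1
    b1 = b1 - lr * db1
    W2 = W2 - lr * dW2
    b2 = b2 - lr * db2
    return W1, b1, W2, b2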

4. What is the purpose of the chain rule in backpropagation?

The chain rule allows the gradients of the loss function to be efficiently computed with respect to
the weights and biases of the network. It ensures that the error from the output layer is properly
propagated backward through each layer by taking into account the contribution of each layer to
the final error.
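
As a small illustration (an assumed example, not from the original text), the chain rule for a single weight feeding a ReLU unit and a squared-error loss can be verified against a finite-difference estimate:

# Tiny scalar example: L = 0.5 * (relu(w * x + b) - y)**2
x, w, b, y = 0.5, 0.3, 0.1, 1.0

z = w * x + b                  # linear step
a = max(z, 0.0)                # ReLU activation

# Chain rule: dL/dw = dL/da * da/dz * dz/dw
dL_da = a - y
da_dz = 1.0 if z > 0 else 0.0
dz_dw = x
grad_chain = dL_da * da_dz * dz_dw

# Finite-difference check of the same derivative
eps = 1e-6
def loss(w_):
    return 0.5 * (max(w_ * x + b, 0.0) - y) ** 2
grad_numeric = (loss(w + eps) - loss(w - eps)) / (2 * eps)

print(grad_chain, grad_numeric)  # the two values agree closely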

5. Implement the forward propagation process for a simple neural network with one hidden layer using NumPy:

import numpy as np

# Define activation function (ReLU for example)
def relu(x):
    return np.maximum(0, x)

# Forward propagation implementation
def forward_propagation(X, W1, b1, W2, b2):
    # Input to hidden layer
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    # Hidden layer to output layer
    Z2 = np.dot(W2, A1) + b2
    A2 = Z2  # Assuming linear activation for output layer
    return A1, A2

# Example usage
X = np.array([[0.5], [0.2]])             # Input features
W1 = np.array([[0.1, 0.2], [0.3, 0.4]])  # Weights for hidden layer
b1 = np.array([[0.01], [0.02]])          # Biases for hidden layer
W2 = np.array([[0.5, 0.6]])              # Weights for output layer
b2 = np.array([[0.03]])                  # Bias for output layer

A1, A2 = forward_propagation(X, W1, b1, W2, b2)

print("Hidden layer output:", A1)
print("Output layer output:", A2)
