Forward & Backward Propagation
Forward propagation is the process of passing input data through the layers of a neural network
to compute the output (predictions). Each layer applies a linear transformation (weighted sum)
to the input, followed by an activation function. The process starts from the input layer and
propagates through the hidden layers to the output layer, where the final prediction is made.
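Concretely, for a network with one hidden layer, the forward pass computes a1 = g(W1 x + b1) for the hidden layer and then y_hat = g(W2 a1 + b2) for the output, where g denotes the activation function; this is exactly what the NumPy exercise below implements.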
The activation function introduces non-linearity into the network, allowing it to model complex
relationships in the data. Without activation functions, the entire network would behave like a
single linear transformation, regardless of the number of layers, limiting its expressive power.
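For example, stacking two layers without activations gives W2 (W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2), which is just one linear transformation with a combined weight matrix and bias.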
Backward propagation (backpropagation) computes the gradients of the loss function with respect to the weights and biases of the network. The chain rule allows these gradients to be computed efficiently: it ensures that the error from the output layer is properly propagated backward through each layer, taking into account the contribution of each layer to the final error. A sketch of this backward pass follows the forward-propagation example below.
5. Implement the forward propagation process for a simple neural network with one hidden layer using NumPy:
import numpy as np

def sigmoid(z):
    # Sigmoid is assumed as the activation here; the exercise does not specify one.
    return 1 / (1 + np.exp(-z))

def forward_propagation(X, W1, b1, W2, b2):
    # Hidden layer: weighted sum plus bias, then activation
    A1 = sigmoid(W1 @ X + b1)
    # Output layer: weighted sum plus bias, then activation
    A2 = sigmoid(W2 @ A1 + b2)
    return A1, A2

# Example usage
X = np.array([[0.5], [0.2]])             # Input features (2 x 1)
W1 = np.array([[0.1, 0.2], [0.3, 0.4]])  # Weights for hidden layer (2 x 2)
b1 = np.array([[0.01], [0.02]])          # Biases for hidden layer (2 x 1)
W2 = np.array([[0.5, 0.6]])              # Weights for output layer (1 x 2)
b2 = np.array([[0.03]])                  # Bias for output layer (1 x 1)

A1, A2 = forward_propagation(X, W1, b1, W2, b2)
print("Prediction:", A2)
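The chain rule described above can also be written out in NumPy. Below is a minimal sketch of the backward pass for the same one-hidden-layer network; it assumes a squared-error loss and a hypothetical target y (neither is specified in the exercise) and reuses X, W1, W2, A1, and A2 from the forward-propagation example.

# Backward pass sketch: assumes loss L = 0.5 * (A2 - y)**2 and sigmoid activations
y = np.array([[1.0]])               # Hypothetical target value, for illustration only

dZ2 = (A2 - y) * A2 * (1 - A2)      # dL/dZ2: loss gradient times sigmoid derivative
dW2 = dZ2 @ A1.T                    # dL/dW2: gradient for the output-layer weights
db2 = dZ2                           # dL/db2: gradient for the output-layer bias

dZ1 = (W2.T @ dZ2) * A1 * (1 - A1)  # Error propagated back to the hidden layer
dW1 = dZ1 @ X.T                     # dL/dW1: gradient for the hidden-layer weights
db1 = dZ1                           # dL/db1: gradient for the hidden-layer bias

A gradient-descent step would then update each parameter in the direction that reduces the loss, e.g. W1 -= learning_rate * dW1.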