backpropagation algorithm_6th semester
We can see that we are working with the weight on the connection from neuron j in the previous hidden layer to the current neuron i.
We multiply the learning rate α by the incoming activation from neuron j. This activation is obtained by taking the net input to neuron j and passing it through neuron j's activation function.
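As a rough sketch of that step (assuming a sigmoid activation function, which the text does not specify, and a made-up net input value):

    import numpy as np

    def sigmoid(z):
        # logistic activation function; an assumption, since the text does
        # not say which activation the network uses
        return 1.0 / (1.0 + np.exp(-z))

    net_j = 0.42          # hypothetical net input to neuron j
    a_j = sigmoid(net_j)  # activation of neuron j: the incoming activation
                          # that gets multiplied by the learning rate alpha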
To compute the total weighted input to the activation function for neuron i, we take the dot product of the incoming weight vector Wj and the activation vector Aj and then add in the bias term.
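A minimal sketch of that net-input computation, with made-up weights, activations, and bias purely for illustration:

    import numpy as np

    W_j = np.array([0.2, -0.5, 0.1])  # incoming weight vector (made-up values)
    A_j = np.array([0.9, 0.3, 0.7])   # activation vector from the previous layer
    bias = 0.05                       # bias term for neuron i

    # total weighted input to neuron i's activation function
    net_i = np.dot(W_j, A_j) + bias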
Error term
This update rule is similar to the perceptron update rule, except that we use the activations of the previous layer rather than the raw input values. The rule also contains a term for the derivative of the activation function, which supplies the gradient of the activation function at the neuron's net input.
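Putting the pieces together, here is a sketch of the update for a single connection, assuming a sigmoid activation and a simple output error (target minus output), neither of which the text pins down; all numeric values are invented for illustration:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_prime(z):
        # derivative of the activation function, used to get its gradient
        s = sigmoid(z)
        return s * (1.0 - s)

    alpha = 0.1           # learning rate
    a_j = 0.67            # activation coming in from neuron j (hypothetical)
    net_i = 0.42          # net input to neuron i (hypothetical)
    target = 1.0          # desired output for neuron i
    a_i = sigmoid(net_i)  # actual output of neuron i

    # error term (delta) for neuron i: the output error scaled by the
    # gradient of the activation function
    delta_i = (target - a_i) * sigmoid_prime(net_i)

    # weight update: learning rate * error term * incoming activation,
    # i.e. the previous layer's activation instead of a raw input value
    w_ij_update = alpha * delta_i * a_j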
Updating the hidden layers
With the backpropagation algorithm, we walk back across the hidden layers, updating the connections between each pair of layers until we reach the input layer.
To update these connections, we take the fractional error value computed previously and multiply it by the activation (the input arriving over the connection from the previous layer) and by the learning rate.
The length of these learning steps, or the amount by which the weights are changed on each iteration, is known as the learning rate.
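A compact sketch of that backward walk, assuming sigmoid activations and a squared-error style output error; the layer sizes, input, and target are made up for the example:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_prime(z):
        # derivative of the activation function, used for its gradient
        s = sigmoid(z)
        return s * (1.0 - s)

    rng = np.random.default_rng(0)
    alpha = 0.1                     # learning rate (length of each learning step)

    # a tiny made-up network: 3 inputs -> 4 hidden units -> 2 outputs
    sizes = [3, 4, 2]
    weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
    biases = [rng.normal(size=m) for m in sizes[1:]]

    x = np.array([0.5, 0.1, 0.9])   # example input
    y = np.array([1.0, 0.0])        # example target

    # forward pass: keep the net inputs and activations of every layer
    activations, nets = [x], []
    a = x
    for W, b in zip(weights, biases):
        z = W @ a + b
        nets.append(z)
        a = sigmoid(z)
        activations.append(a)

    # error term at the output layer
    delta = (activations[-1] - y) * sigmoid_prime(nets[-1])

    # walk back across the layers toward the input, updating each set of
    # connections with: learning rate * error term * incoming activation
    for l in range(len(weights) - 1, -1, -1):
        grad_W = np.outer(delta, activations[l])
        grad_b = delta
        if l > 0:
            # push the fractional error back through the weights and the
            # activation-function gradient of the layer below
            delta = (weights[l].T @ delta) * sigmoid_prime(nets[l - 1])
        weights[l] -= alpha * grad_W
        biases[l] -= alpha * grad_b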