5. The Backwards Pass
Update the weights using gradient descent so that they bring the actual output closer to the target output, thereby minimizing the error for each output neuron and for the network as a whole.
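To see the idea in isolation, here is a minimal one-dimensional sketch (the target, starting output, and learning rate are made-up illustrative values, not the tutorial's): repeatedly stepping opposite to the gradient of the squared error moves the output toward the target. In the network, the same kind of step is applied to each weight via the chain rule.

```python
# One-dimensional illustration of gradient descent on a squared error.
# All numbers here are illustrative placeholders, not the tutorial's values.
target = 0.5
out = 0.9            # current "actual output"
eta = 0.4            # learning rate

for step in range(5):
    error = 0.5 * (target - out) ** 2
    grad = out - target          # d(error)/d(out)
    out -= eta * grad            # step opposite to the gradient
    print(step, round(out, 4), round(error, 6))
```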
10. Update the weight.
To decrease the error, we then subtract this value, multiplied by the learning rate, from the current weight:

w⁺ = w − η · ∂E_total/∂w

where η = learning rate (eta).
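As a concrete sketch of this update rule in Python (the weight value, gradient, and learning rate below are illustrative placeholders, not the tutorial's actual numbers):

```python
# Gradient-descent update for a single weight: w_new = w - eta * dE_total/dw
# The numbers below are illustrative placeholders.
eta = 0.5                 # learning rate (eta)
w = 0.40                  # current weight
dE_dw = 0.08              # partial derivative of the total error w.r.t. this weight

w_new = w - eta * dE_dw   # subtract the scaled gradient to decrease the error
print(w_new)              # 0.36
```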
11. We perform the actual updates in the neural network after we have the new weights leading into the hidden layer neurons (i.e., we use the original weights, not the updated weights, when we continue the backpropagation algorithm below).
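A hedged sketch of this deferred-update rule (the weight names and gradient values are illustrative, not taken from the tutorial): all gradients are computed while the original weights are still in place, and only then are the weights overwritten.

```python
# Compute every gradient with the ORIGINAL weights, then apply all updates.
# Weight names and values here are illustrative placeholders.
eta = 0.5
weights = {"w5": 0.40, "w6": 0.45, "w7": 0.50, "w8": 0.55}

# 1) Backpropagation first collects the gradients (derived from the original
#    weights via the chain rule; hard-coded here for brevity).
gradients = {"w5": 0.082, "w6": 0.083, "w7": -0.023, "w8": -0.017}

# 2) Only after backpropagation is finished do we overwrite the weights.
weights = {name: value - eta * gradients[name] for name, value in weights.items()}
print(weights)
```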
16. Finally, we’ve updated all of our weights!
When we fed forward the 0.05 and 0.1 inputs originally, the error on the
network was 0.298371109.
After this first round of backpropagation, the total error is now down to
0.291027924.
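To tie the steps together, below is a minimal end-to-end sketch of one forward pass and one round of backpropagation on a 2-2-2 sigmoid network like the one in this walkthrough. Only the 0.05 and 0.1 inputs are given in this excerpt; the initial weights, biases, targets, and learning rate in the sketch are assumptions, so treat the printed errors as illustrative rather than as the exact figures quoted above.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, i1, i2, b1, b2):
    """Forward pass through a 2-2-2 network with sigmoid activations."""
    h1 = sigmoid(w["w1"] * i1 + w["w2"] * i2 + b1)
    h2 = sigmoid(w["w3"] * i1 + w["w4"] * i2 + b1)
    o1 = sigmoid(w["w5"] * h1 + w["w6"] * h2 + b2)
    o2 = sigmoid(w["w7"] * h1 + w["w8"] * h2 + b2)
    return h1, h2, o1, o2

def total_error(o1, o2, t1, t2):
    return 0.5 * (t1 - o1) ** 2 + 0.5 * (t2 - o2) ** 2

# Inputs are from the walkthrough; the targets, biases, learning rate, and
# initial weights below are assumptions (not stated in this excerpt).
i1, i2 = 0.05, 0.10
t1, t2 = 0.01, 0.99
b1, b2 = 0.35, 0.60          # biases (left unchanged for brevity)
eta = 0.5
w = {"w1": 0.15, "w2": 0.20, "w3": 0.25, "w4": 0.30,
     "w5": 0.40, "w6": 0.45, "w7": 0.50, "w8": 0.55}

h1, h2, o1, o2 = forward(w, i1, i2, b1, b2)
print("total error before:", total_error(o1, o2, t1, t2))

# Backwards pass: the chain rule gives dE_total/dw for every weight.
# Output-layer deltas: dE/d(net_o) = (out - target) * out * (1 - out)
d_o1 = (o1 - t1) * o1 * (1 - o1)
d_o2 = (o2 - t2) * o2 * (1 - o2)
# Hidden-layer deltas, computed with the ORIGINAL output-layer weights.
d_h1 = (d_o1 * w["w5"] + d_o2 * w["w7"]) * h1 * (1 - h1)
d_h2 = (d_o1 * w["w6"] + d_o2 * w["w8"]) * h2 * (1 - h2)

grads = {"w1": d_h1 * i1, "w2": d_h1 * i2, "w3": d_h2 * i1, "w4": d_h2 * i2,
         "w5": d_o1 * h1,  "w6": d_o1 * h2,  "w7": d_o2 * h1,  "w8": d_o2 * h2}

# Apply every update only after all gradients have been computed.
w = {name: value - eta * grads[name] for name, value in w.items()}

h1, h2, o1, o2 = forward(w, i1, i2, b1, b2)
print("total error after one round:", total_error(o1, o2, t1, t2))
```

Repeating this backward pass and weight update drives the total error down further with each round.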