BACK PROPAGATION
BACKGROUND
• Backpropagation was invented by Bryson and Ho in 1969.
• It is a method for supervised training in multi-layer networks.
• The Backpropagation algorithm is a sensible approach for dividing up the contribution of each
weight to the overall error.
• The weight-update step works in much the same way as it does for perceptrons.
• It relies on two key ideas: hidden layers and gradients.
• There are two differences in the updating rule (a minimal sketch of the resulting rule is given at the end of this section):
1) The activation of the hidden unit is used instead of the activation of the input value.
2) The rule contains a term for the gradient of the activation function.
• Back propagation works by approximating the non-linear relationship between
the input and the output by adjusting the weight values internally.
• It can also generalize to inputs that are not included in the training
patterns (predictive ability).
• It is a generalization of the delta rule for non-linear activation functions and
multi-layer networks.
• The back propagation network has two stages, training and testing.
• During the training phase, the network is "shown" sample inputs and the correct
classifications. For example, the input might be an encoded picture of a face, and
the output could be represented by a code that corresponds to the name of the
person.
• The encoding scheme defines the network architecture, so once a network has been trained,
the scheme cannot be changed without creating a totally new net.
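The two differences listed above yield the generalized delta rule. The following is a minimal Python sketch of one weight update for a single-hidden-layer network; the sigmoid activation, the learning rate of 0.5, and all variable names are illustrative assumptions, not material from the original slides.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(a):
    # Gradient of the sigmoid, written in terms of its own output a = sigmoid(x).
    return a * (1.0 - a)

def backprop_update(x, target, W_hidden, W_output, lr=0.5):
    # Forward pass: input -> hidden -> output.
    hidden = sigmoid(W_hidden @ x)
    output = sigmoid(W_output @ hidden)

    # Output deltas: error times the gradient of the activation function
    # (difference 2 above).
    delta_out = (target - output) * sigmoid_grad(output)

    # Hidden deltas: no teacher values exist for hidden units, so the output
    # deltas are propagated backwards through the hidden-to-output weights.
    delta_hidden = (W_output.T @ delta_out) * sigmoid_grad(hidden)

    # Weight changes use the hidden-unit activation instead of the raw input
    # for the second layer (difference 1 above).
    W_output += lr * np.outer(delta_out, hidden)
    W_hidden += lr * np.outer(delta_hidden, x)
    return W_hidden, W_output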
BACK PROPAGATION ARCHITECTURE
• A back propagation network consists of at least three layers of units:
An input layer,
At least one intermediate hidden layer, and
An output layer
• Typically units are connected in a feed-forward fashion with input units fully connected to units in the
hidden layer and hidden units fully connected to units in the output layer.
• When a back propagation network is cycled, an input pattern is propagated forward to the output units
through the intervening input-to-hidden and hidden-to-output weights (see the sketch after this list).
• The output of a back propagation network is interpreted as a classification decision.
• Back propagation neural networks can have more than one hidden layer.
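As a concrete illustration of cycling a three-layer network, the sketch below propagates one input pattern through the input-to-hidden and hidden-to-output weights and interprets the output as a class decision. The layer sizes, random weights, sigmoid activation, and input pattern are made-up examples, not values from the original slides.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W_hidden, W_output):
    # Cycle the network: propagate the input pattern forward to the output units.
    hidden = sigmoid(W_hidden @ x)       # input layer -> hidden layer
    output = sigmoid(W_output @ hidden)  # hidden layer -> output layer
    return output

# Illustrative 4-input, 3-hidden, 2-output network with random weights.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))
W_output = rng.normal(size=(2, 3))
pattern = np.array([0.2, 0.9, 0.1, 0.7])

scores = forward(pattern, W_hidden, W_output)
decision = int(np.argmax(scores))  # interpret the output as a classification decision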
BACK PROPAGATION NETWORK TRAINING
• Ideally, the error function would reach a value of exactly zero once the neural network
has been correctly trained. This, however, is numerically unrealistic, so in practice training
stops once the error is acceptably small (a minimal training-loop sketch follows).
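To make the training phase concrete, here is a minimal training-loop sketch. The XOR data set, the number of hidden units, the learning rate, and the stopping threshold are all assumptions chosen for illustration; the point is that the sum-of-squares error shrinks toward zero without ever reaching it exactly.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def with_bias(v):
    return np.append(v, 1.0)  # constant bias input

# Toy training set (XOR): sample inputs and their correct classifications.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W_h = rng.normal(size=(4, 3))  # input (+ bias) -> hidden weights
W_o = rng.normal(size=(1, 5))  # hidden (+ bias) -> output weights
lr = 0.5

for epoch in range(20000):
    error = 0.0
    for x, t in zip(X, T):
        # Forward pass.
        h = sigmoid(W_h @ with_bias(x))
        o = sigmoid(W_o @ with_bias(h))
        error += float(np.sum((t - o) ** 2))

        # Backward pass: generalized delta rule.
        d_o = (t - o) * o * (1 - o)
        d_h = (W_o[:, :-1].T @ d_o) * h * (1 - h)
        W_o += lr * np.outer(d_o, with_bias(h))
        W_h += lr * np.outer(d_h, with_bias(x))

    # The error approaches zero but never reaches it exactly,
    # so training stops once it is acceptably small.
    if error < 1e-3:
        break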
LIMITATION OF BACK PROPAGATION
• It cannot use the Perceptron Learning Rule directly, because no teacher values are
available for the hidden units.
THE END.