Backpropagation Neural Network

The document discusses the fundamentals of neural networks, specifically multilayer perceptrons (MLPs) and the backpropagation algorithm used for supervised learning. It outlines the notations for biases, outputs, and weights, and describes a proposed algorithm for training an MLP using sample data. Key concepts include the loss function, updating parameters, and the application of the chain rule in backpropagation.


NEURAL NETWORK SPS, SUMMER 2022

NEURAL NETWORK
(MULTILAYER PERCEPTRON)
MLP NOTATIONS
Bias: b_ij, where i is the layer number and j is the neuron number
Output: o_ij, where i is the layer number and j is the neuron number
Weight: w^k_ij, where
  k is the layer number,
  i is the node number in the previous layer, and
  j is the node number the connection goes to
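As a sketch of how this notation can map onto arrays (the 2-2-1 layer sizes and the initial values w = 1, b = 0 are illustrative assumptions, not fixed by the slides):

```python
import numpy as np

# Hypothetical 2-2-1 MLP to illustrate the notation.
# w[k-1][i][j]: weight w^k_ij in layer k, from previous-layer node i to node j.
# b[k-1][j]:    bias b_kj of neuron j in layer k.
w = [np.ones((2, 2)),  # layer 1: 2 inputs -> 2 hidden nodes
     np.ones((2, 1))]  # layer 2: 2 hidden nodes -> 1 output node
b = [np.zeros(2),      # biases of the 2 hidden neurons
     np.zeros(1)]      # bias of the output neuron

x = np.array([8.0, 10.0])   # one input row (Room, Size)
o1 = x @ w[0] + b[0]        # outputs o_1j of layer 1
o2 = o1 @ w[1] + b[1]       # output o_21 of layer 2
```

With all weights 1 and biases 0, each hidden output is simply the sum of the inputs, and the final output is the sum of the hidden outputs.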
BACKPROPAGATION
Backpropagation, short for "backward propagation of errors," is an algorithm for the supervised learning of artificial neural networks using gradient descent. Given an artificial neural network and an error function, the method calculates the gradient of the error function with respect to the neural network's weights.
SAMPLE DATA AND MLP
Linear regression with no activation function.

Room | Size (sq. feet) | Price (in crore)
  8  |       10        |        8
  5  |        6        |        5
  9  |        7        |        7
  3  |        5        |        4
PROPOSED ALGORITHM
Step 1: Choose initial values for the w's and b's (usually w = 1 and b = 0).
Step 2: Select a row of data (usually at random).
Step 3: Predict ŷ using forward propagation (dot product with w, plus b).
Step 4: Choose a loss function (here we use L = (y − ŷ)²).
Step 5: Update each layer's trainable parameters:
w_new = w_old − lr · (∂L/∂w_old)
b_new = b_old − lr · (∂L/∂b_old)
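The steps above can be sketched for the sample data, assuming a single linear unit ŷ = w·x + b with no activation (as the sample-data slide suggests); the learning rate and iteration count are illustrative choices not given in the slides:

```python
import random
import numpy as np

# Sample data from the slides: (Room, Size) -> Price.
X = np.array([[8.0, 10.0], [5.0, 6.0], [9.0, 7.0], [3.0, 5.0]])
y = np.array([8.0, 5.0, 7.0, 4.0])

# Step 1: initial parameters (the slides suggest w = 1, b = 0).
w = np.ones(2)
b = 0.0
lr = 0.001  # illustrative learning rate (not specified in the slides)

random.seed(0)
for _ in range(5000):
    # Step 2: select a row at random.
    i = random.randrange(len(X))
    x, target = X[i], y[i]
    # Step 3: forward propagation (dot product of w and x, plus b).
    y_hat = w @ x + b
    # Step 4: squared-error loss L = (y - y_hat)^2, which gives
    # dL/dw = -2 (y - y_hat) x  and  dL/db = -2 (y - y_hat).
    grad_w = -2.0 * (target - y_hat) * x
    grad_b = -2.0 * (target - y_hat)
    # Step 5: gradient-descent update of each trainable parameter.
    w = w - lr * grad_w
    b = b - lr * grad_b
```

After training, predictions X @ w + b should sit close to the target prices; because the four rows are not exactly linearly consistent, a small residual error remains even at the optimum.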
KNOWING THE BASICS
There are 9 trainable parameters in total.
Loss = (y − ŷ)²
y is fixed, so we can change only ŷ.
After calculating the loss, we have to update all the trainable parameters.
To update them, we also need the derivatives of the loss with respect to the parameters.
We apply the chain rule for backpropagation.
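A sketch of the chain rule in action, assuming the 9 trainable parameters come from a 2-2-1 MLP with no activation (2×2 + 2 weights and biases in the hidden layer, plus 2 weights and 1 bias in the output layer); the parameter values are illustrative:

```python
import numpy as np

# Assumed 2-2-1 architecture: 2*2 + 2 + 2*1 + 1 = 9 trainable parameters.
W1 = np.array([[0.5, -0.3], [0.2, 0.4]])  # illustrative hidden weights
b1 = np.array([0.1, -0.1])                # hidden biases
w2 = np.array([0.7, -0.2])                # output weights
b2 = 0.05                                 # output bias

x = np.array([8.0, 10.0])  # one sample row (Room, Size)
y = 8.0                    # its target price

# Forward pass (no activation function).
h = x @ W1 + b1        # hidden outputs: h_j = sum_i x_i W1[i,j] + b1_j
y_hat = h @ w2 + b2    # network output
loss = (y - y_hat) ** 2

# Backward pass: each gradient is built by the chain rule.
dL_dyhat = -2.0 * (y - y_hat)   # dL/dŷ
dL_db2 = dL_dyhat               # ŷ = h·w2 + b2, so dŷ/db2 = 1
dL_dw2 = dL_dyhat * h           # dŷ/dw2_j = h_j
dL_dh = dL_dyhat * w2           # propagate dL/dŷ back to the hidden layer
dL_db1 = dL_dh                  # dh_j/db1_j = 1
dL_dW1 = np.outer(x, dL_dh)     # dh_j/dW1[i,j] = x_i
```

Checking one of these analytic gradients against a finite-difference estimate is a quick way to confirm the chain-rule bookkeeping is right.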
CHAIN RULE
In differential calculus, the chain rule is a formula used to find the derivative of a composite function: for f(g(x)), the derivative is f'(g(x)) · g'(x).
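A worked example (the function here is a hypothetical illustration, not taken from the slides): differentiating (3x + 1)² with the chain rule and checking the result numerically.

```python
# Chain rule on f(g(x)) with f(u) = u^2 and g(x) = 3x + 1:
# d/dx (3x + 1)^2 = f'(g(x)) * g'(x) = 2(3x + 1) * 3 = 6(3x + 1).
def g(x):
    return 3 * x + 1

def f_of_g(x):
    return g(x) ** 2

def analytic_derivative(x):
    return 2 * g(x) * 3  # f'(g(x)) * g'(x)

# Finite-difference check at x = 2: the chain rule predicts 6 * 7 = 42.
eps = 1e-6
numeric = (f_of_g(2.0 + eps) - f_of_g(2.0)) / eps
```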
