
Backpropagation Process in Deep Neural Network
Backpropagation is one of the most important concepts in neural networks. Our task is to classify the data as accurately as possible. To do this, we have to update the weights and biases, but how can we do that in a deep neural network? In the linear regression model, we use gradient descent to optimize the parameters. Similarly, here we also use the gradient descent algorithm, carried out through backpropagation.

For a single training example, the backpropagation algorithm calculates the gradient of the error function with respect to each weight. Backpropagation can be written as a function of the neural network. Backpropagation algorithms are a set of methods used to efficiently train artificial neural networks following a gradient descent approach that exploits the chain rule.
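
In gradient-descent form, each weight is nudged in the direction that reduces the error, where η denotes the learning rate:

w_new = w_old − η × ∂E/∂w

The backward pass exists precisely to compute the ∂E/∂w term for every weight, one layer at a time, using the chain rule.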

The main feature of backpropagation is that it is an iterative, recursive and efficient method for calculating the updated weights, improving the network until it is able to perform the task for which it is being trained. The derivatives of the activation function must be known at network design time for backpropagation to work.
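
For instance, the sigmoid activation used in the example below has a derivative with a simple closed form. A minimal Python sketch (the function names are my own, not from the text):

import math

def sigmoid(x):
    # Logistic sigmoid: maps any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    # Known in closed form: sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)).
    # This is the derivative backpropagation needs at design time.
    s = sigmoid(x)
    return s * (1.0 - s)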

Now, how is the error function used in backpropagation, and how does backpropagation work? Let us start with an example and work through it mathematically to understand exactly how the weights are updated.

Input values
x1 = 0.05
x2 = 0.10

Initial weights
w1 = 0.15    w5 = 0.40
w2 = 0.20    w6 = 0.45
w3 = 0.25    w7 = 0.50
w4 = 0.30    w8 = 0.55

Bias values
b1 = 0.35    b2 = 0.60

Target values
T1 = 0.01
T2 = 0.99

Now, we first calculate the values of H1 and H2 by a forward pass.

Forward Pass
To find the value of H1, we multiply the input values by the corresponding weights and add the bias:

H1=x1×w1+x2×w2+b1
H1=0.05×0.15+0.10×0.20+0.35
H1=0.3775

To calculate the final result of H1, we apply the sigmoid function:

out_H1 = 1/(1 + e^(−H1)) = 1/(1 + e^(−0.3775)) = 0.593269992

We will calculate the value of H2 in the same way as H1

H2=x1×w3+x2×w4+b1
H2=0.05×0.25+0.10×0.30+0.35
H2=0.3925

To calculate the final result of H2, we apply the sigmoid function:

out_H2 = 1/(1 + e^(−H2)) = 1/(1 + e^(−0.3925)) = 0.596884378


Now, we calculate the values of y1 and y2 in the same way as we calculated H1 and H2.

To find the value of y1, we multiply the hidden-layer outputs out_H1 and out_H2 by the weights:

y1 = out_H1×w5 + out_H2×w6 + b2
y1 = 0.593269992×0.40 + 0.596884378×0.45 + 0.60
y1 = 1.10590597

To calculate the final result of y1, we apply the sigmoid function:

out_y1 = 1/(1 + e^(−y1)) = 1/(1 + e^(−1.10590597)) = 0.75136507

We will calculate the value of y2 in the same way as y1

y2 = out_H1×w7 + out_H2×w8 + b2
y2 = 0.593269992×0.50 + 0.596884378×0.55 + 0.60
y2 = 1.2249214

To calculate the final result of y2, we apply the sigmoid function:

out_y2 = 1/(1 + e^(−y2)) = 1/(1 + e^(−1.2249214)) = 0.772928465


Our target values are 0.01 and 0.99, but the computed outputs out_y1 and out_y2 do not match the target values T1 and T2.
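
The entire forward pass above can be reproduced in a few lines of Python. This is a minimal sketch; variable names such as out_h1 are my own labels for the post-sigmoid activations:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Inputs, weights and biases from the example above
x1, x2 = 0.05, 0.10
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2 = 0.35, 0.60

# Hidden layer: weighted sum, then sigmoid
h1 = x1 * w1 + x2 * w2 + b1                # 0.3775
h2 = x1 * w3 + x2 * w4 + b1                # 0.3925
out_h1, out_h2 = sigmoid(h1), sigmoid(h2)  # 0.593269992, 0.596884378

# Output layer: weighted sum of hidden activations, then sigmoid
y1 = out_h1 * w5 + out_h2 * w6 + b2        # 1.10590597
y2 = out_h1 * w7 + out_h2 * w8 + b2        # 1.2249214
out_y1, out_y2 = sigmoid(y1), sigmoid(y2)  # 0.75136507, 0.772928465

print(out_y1, out_y2)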

Now, we will find the total error, which is the sum of the squared differences between the actual outputs and the target outputs. The total error is calculated as

E_total = Σ ½(target − output)²

So, the total error is

E1 = ½(T1 − out_y1)² = ½(0.01 − 0.75136507)² = 0.274811083
E2 = ½(T2 − out_y2)² = ½(0.99 − 0.772928465)² = 0.023560026
E_total = E1 + E2 = 0.274811083 + 0.023560026 = 0.298371109
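
The same computation as a short, self-contained Python check (out_y1 and out_y2 are the forward-pass outputs computed above):

# Squared-error loss for each output, using the forward-pass results
T1, T2 = 0.01, 0.99
out_y1, out_y2 = 0.75136507, 0.772928465

E1 = 0.5 * (T1 - out_y1) ** 2   # 0.274811083
E2 = 0.5 * (T2 - out_y2) ** 2   # 0.023560026
E_total = E1 + E2               # 0.298371109
print(E_total)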

Now, we will backpropagate this error to update the weights using a backward pass.

How the Backpropagation Algorithm Works


The backpropagation algorithm in a neural network computes the gradient of the loss function for a single weight by the chain rule. It efficiently computes one layer at a time, unlike a naive direct computation. It computes the gradient, but it does not define how the gradient is used. It generalizes the computation in the delta rule.
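
As a concrete illustration using the forward-pass values above, the gradient of the total error with respect to w5 factors through the chain rule as

∂E_total/∂w5 = ∂E_total/∂out_y1 × ∂out_y1/∂y1 × ∂y1/∂w5
             = (out_y1 − T1) × out_y1(1 − out_y1) × out_H1
             = 0.74136507 × 0.186815602 × 0.593269992
             = 0.082167041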

1. Inputs X arrive through the preconnected path.


2. Input is modeled using real weights W. The weights are usually randomly selected.
3. Calculate the output for every neuron from the input layer, through the hidden layers, to the
output layer.
4. Calculate the error in the outputs

Error = Actual Output − Desired Output

5. Travel back from the output layer to the hidden layers to adjust the weights such that the
error decreases.

Keep repeating the process until the desired output is achieved. A minimal sketch of this loop is shown below.
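
Putting steps 1-5 together: below is a minimal, self-contained Python sketch of this loop for the 2-2-2 network from the worked example. The learning rate of 0.5 is an assumption (the text does not specify one), and bias updates are omitted to match the weight-only updates described above:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Values from the worked example above
x = [0.05, 0.10]                          # inputs x1, x2
W_hid = [[0.15, 0.20], [0.25, 0.30]]      # rows: weights into H1, H2
W_out = [[0.40, 0.45], [0.50, 0.55]]      # rows: weights into y1, y2
b1, b2 = 0.35, 0.60
target = [0.01, 0.99]                     # T1, T2
lr = 0.5                                  # assumed learning rate

for step in range(5):
    # Steps 1-3: forward pass through hidden and output layers
    out_h = [sigmoid(r[0] * x[0] + r[1] * x[1] + b1) for r in W_hid]
    out_y = [sigmoid(r[0] * out_h[0] + r[1] * out_h[1] + b2) for r in W_out]

    # Step 4: total squared error
    E = sum(0.5 * (t - o) ** 2 for t, o in zip(target, out_y))
    print(f"step {step}: E_total = {E:.9f}")

    # Step 5: backward pass.
    # Output-layer deltas: dE/dy = (out - target) * out * (1 - out)
    d_y = [(o - t) * o * (1 - o) for o, t in zip(out_y, target)]
    # Hidden-layer deltas via the chain rule through the (pre-update) output weights
    d_h = [
        sum(d_y[j] * W_out[j][i] for j in range(2)) * out_h[i] * (1 - out_h[i])
        for i in range(2)
    ]

    # Gradient-descent updates: w -= lr * delta * upstream activation
    W_out = [[w - lr * d_y[j] * out_h[i] for i, w in enumerate(row)]
             for j, row in enumerate(W_out)]
    W_hid = [[w - lr * d_h[i] * x[k] for k, w in enumerate(row)]
             for i, row in enumerate(W_hid)]

On the first iteration this prints E_total = 0.298371109, matching the hand computation above, and with the assumed learning rate the error decreases on every subsequent pass; after the first update, w5 becomes 0.35891648.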

Why Do We Need Backpropagation?


The most prominent advantages of backpropagation are:

• Backpropagation is fast, simple and easy to program


• It has no parameters to tune apart from the number of inputs
• It is a flexible method as it does not require prior knowledge about the network
• It is a standard method that generally works well
• It does not need any special mention of the features of the function to be learned.
