Convolutional Neural Networks
Figure 2: The result of using a kernel which extracts edges in the vertical direction
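To illustrate what such a kernel does, the short sketch below applies a Sobel-style vertical-edge filter to a toy image using NumPy and SciPy. The kernel values and the 6x6 test image are illustrative assumptions, not taken from the figure itself.

```python
import numpy as np
from scipy.signal import convolve2d

# Sobel-style kernel that responds to vertical edges,
# i.e. large horizontal changes in intensity.
vertical_edge_kernel = np.array([
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
])

# Toy image: dark left half, bright right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# 'valid' keeps only positions where the kernel fully overlaps the image.
edges = convolve2d(image, vertical_edge_kernel, mode="valid")
print(edges)  # nonzero responses appear at the vertical boundary, zero elsewhere
```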
1. Forward propagation (FP).
2. Back propagation (BP).
The main goal of back propagation is to update each of the weights in the
network so that the predicted output moves closer to the target output,
thereby minimizing the error for each output neuron and for the network as a
whole. The steps of the back propagation algorithm are [2] (a minimal sketch
of both phases follows the list):
- Selecting an input pair (the normal input x and the desired output d(x))
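To make the two phases concrete, here is a minimal sketch of one training step for a single sigmoid layer with a squared-error loss. The network size, learning rate, and loss are illustrative assumptions, not details taken from [2].

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))      # weights: 3 inputs -> 2 outputs
x = np.array([0.5, -1.0, 2.0])   # the normal input x
d = np.array([1.0, 0.0])         # the desired output d(x)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 1. Forward propagation: compute the predicted output.
z = W @ x
y = sigmoid(z)

# 2. Back propagation: gradient of the error E = 0.5 * ||y - d||^2
#    with respect to the weights, via the chain rule.
delta = (y - d) * y * (1.0 - y)   # dE/dz for a sigmoid unit
grad_W = np.outer(delta, x)       # dE/dW

# Weight update, moving against the gradient (learning rate 0.1).
W -= 0.1 * grad_W
```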
Figure: CNN architecture, consisting of the feature extraction layers (convolution and max-pooling) followed by a fully-connected layer.
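A compact way to express this architecture in code is shown below, using PyTorch for illustration. The channel counts, kernel size, and 28x28 single-channel input are assumptions chosen for the sketch, not values from the figure.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Feature extraction (convolution + max-pooling) followed by
    a fully-connected classification layer."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),  # convolution layer
            nn.ReLU(),
            nn.MaxPool2d(2),                            # max-pooling layer
        )
        self.classifier = nn.Linear(8 * 14 * 14, num_classes)  # fully-connected layer

    def forward(self, x):
        x = self.features(x)        # feature extraction
        x = x.flatten(1)            # flatten to one vector per sample
        return self.classifier(x)   # class scores

logits = SimpleCNN()(torch.randn(1, 1, 28, 28))
print(logits.shape)  # torch.Size([1, 10])
```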
A loss function is a method of evaluating how well the algorithm models the
dataset. Most machine learning algorithms use some sort of loss function
during optimization, that is, when finding the best parameters (weights) for
the data: the parameters of the network are changed based on the gradient of
the loss function, using gradient descent optimization. Gradient descent is
an optimization algorithm used while training a machine learning model; it
tweaks the parameters iteratively to minimize a given function down to a
local minimum (or the global minimum when the function is convex).
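As a minimal sketch of this procedure, the following applies gradient descent to a simple convex function; the function, starting point, learning rate, and step count are arbitrary choices for illustration.

```python
# Gradient descent on the convex function f(w) = (w - 3)^2,
# whose minimum is at w = 3.
def grad(w):
    return 2.0 * (w - 3.0)  # derivative f'(w)

w = 0.0      # initial parameter
lr = 0.1     # learning rate (step size)
for step in range(100):
    w -= lr * grad(w)  # step against the gradient

print(w)  # approaches 3.0, the minimum
```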
When the output of the algorithm does not match the desired output, the loss
function takes large values; when the output corresponds well to the desired
output, it takes small values.
One of the more common loss functions used in Convolutional Neural Network
algorithms is the Softmax loss function, a generalization of the binary
Logistic Regression classifier to multiple classes. The typical
implementation of the loss function does not differentiate between classes:
misclassifying a pixel as class 1 when it should be class 0 produces the same
amount of loss as misclassifying a pixel as class 0 when it should be class 1.
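A minimal sketch of the Softmax loss for a single sample is given below; the scores and class count are made up for illustration. Because the loss depends only on the probability assigned to the correct class, and not on which class that is, the two misclassification directions described above cost the same.

```python
import numpy as np

def softmax_loss(scores, target_class):
    """Softmax + cross-entropy for one sample of raw class scores."""
    shifted = scores - np.max(scores)            # shift for numerical stability
    probs = np.exp(shifted) / np.sum(np.exp(shifted))
    return -np.log(probs[target_class])          # negative log-probability of target

scores = np.array([2.0, 0.1])          # hypothetical scores for 2 classes
# Confidently predicting class 0 when the target is class 1...
loss_a = softmax_loss(scores, 1)
# ...costs exactly as much as confidently predicting class 1
# when the target is class 0.
loss_b = softmax_loss(scores[::-1], 0)
print(loss_a, loss_b)  # equal: the loss does not weight the classes differently
```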