Deep Learning_Lecture 4_CNNs
Deep Learning
Presented by: Dr. Hanaa Bayomi
[email protected]
Convolutional Neural Network
CNN
Multi-layer perceptron and image processing
➢Like almost every other neural network, CNNs are trained with a version of
the back-propagation algorithm.
1. local connectivity
• Each hidden unit is connected only to a sub-region (patch) of
the input image
• It is connected to all channels of that patch; this patch is the
unit's receptive field
- 1 channel for a greyscale image
- 3 channels (R, G, B) for a color image
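Local connectivity is mainly a saving in parameters. A minimal sketch (with hypothetical sizes) comparing the weight count of one hidden unit under full connectivity versus a local patch across all channels:

```python
# Hypothetical input size: a 32x32 RGB image.
H, W, C = 32, 32, 3        # height, width, channels
patch = 3                  # receptive-field size (patch x patch)

full = H * W * C           # fully connected: one weight per input value
local = patch * patch * C  # locally connected: weights only for the patch,
                           # but across ALL channels (the receptive field)

print(full, local)  # 3072 vs. 27 weights per hidden unit
```

With parameter sharing (next idea), those 27 weights are additionally reused by every hidden unit in the same feature map.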
Convolutional neural Network
Convolutional networks leverage these ideas
2. parameter sharing
1. convolution layer
2. ReLU (rectified linear units) layer (element wise threshold)
3. pooling layer
4. fully connected layer
5. loss layer (during the training process)
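The five stages above can be sketched as one toy forward pass. This is an illustrative NumPy implementation with made-up sizes (10x10 input, one 3x3 filter, 2x2 pooling, one output unit), not a full training setup:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal((10, 10))   # 10x10 greyscale input
w = rng.standard_normal((3, 3))     # one 3x3 convolution filter

# 1. convolution layer: slide the filter over the image
#    (valid positions, stride 1 -> 8x8 feature map)
conv = np.array([[np.sum(x[i:i + 3, j:j + 3] * w)
                  for j in range(8)] for i in range(8)])

# 2. ReLU layer: element-wise max(0, y)
relu = np.maximum(0, conv)

# 3. pooling layer: 2x2 max pooling -> 4x4
pool = relu.reshape(4, 2, 4, 2).max(axis=(1, 3))

# 4. fully connected layer: flatten, then one linear output unit
w_fc = rng.standard_normal(16)
score = pool.ravel() @ w_fc

# 5. loss layer (training only): e.g. squared error against a target
loss = (score - 1.0) ** 2

print(conv.shape, pool.shape)  # (8, 8) (4, 4)
```

As in most CNN libraries, the "convolution" here is really cross-correlation (the filter is not flipped); for learned filters the distinction does not matter.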
1- Convolution layer
A convnet processes an image using matrices of weights called filters (or
features) that detect specific attributes such as diagonal edges, vertical
edges, etc. Moreover, as the image progresses through the layers, the
filters are able to recognize more complex attributes.
Convolution layer
The convolution layer is always the first step in a convnet. Let's say
we have a 10 x 10 pixel image, here represented by a 10 x 10 x 1
matrix of numbers:
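The spatial size of the resulting feature map follows a standard formula: for an n x n input, an f x f filter, stride s, and zero-padding p on each side, the output is ((n + 2p - f) / s + 1) on each side. A small sketch:

```python
def conv_output_size(n, f, stride=1, pad=0):
    """Output side length of a convolution: (n + 2*pad - f) // stride + 1."""
    return (n + 2 * pad - f) // stride + 1

# The 10x10 image above with a 3x3 filter, stride 1, no padding:
print(conv_output_size(10, 3))  # 8, i.e. an 8x8 feature map
```

So sliding a 3x3 filter over the 10 x 10 image leaves 8 valid positions in each direction.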
Convolution Example
• The addition of the ReLU layer allows the neural network to account for non-linear
relationships, i.e. the ReLU layer allows the convnet to account for situations in which
the relationship between the pixel value inputs and the convnet output is not linear.
• The ReLU function takes a value y and returns 0 if y is negative, and y if y is positive:
f(y) = max(0, y)
Rectified linear unit (ReLU): f(y) = max(0, y)
- Simplifies backprop
- Makes learning faster
- Makes features sparse
2- ReLU Layer f(x) = max(0,x)
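The definition above is a one-liner in code; applied element-wise, it zeroes out negative activations and passes positive ones through unchanged:

```python
def relu(y):
    """Rectified linear unit: 0 for negative inputs, y otherwise."""
    return max(0.0, y)

print(relu(-2.5), relu(0.0), relu(3.0))  # 0.0 0.0 3.0
```

In a convnet this is applied to every entry of the feature map produced by the convolution layer.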