Week 11 - Convolutional networks
Invariance
• A function f[x] is invariant to a transformation t[] if f[t[x]] = f[x], i.e., the function output is the same even after the transformation is applied.
Invariance example
e.g., Image classification
• Image has been translated, but we want our classifier to give the same result
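A toy illustration of this idea (my own example, not from the slides): global average pooling of a 1D signal is invariant to circular translation, so f[t[x]] = f[x] exactly.

```python
# Toy illustration of invariance: global average pooling ignores circular translation.
import numpy as np

def f(x):
    return x.mean()           # f[x]: global average pooling

def t(x, shift=3):
    return np.roll(x, shift)  # t[x]: translate the signal (circularly)

x = np.random.randn(10)
print(np.isclose(f(t(x)), f(x)))  # True: f[t[x]] = f[x]
```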
Equivariance
• A function f[x] is equivariant to a transformation t[] if f[t[x]] = t[f[x]], i.e., the output is transformed in the same way as the input.
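A minimal sketch (again my own toy example, not from the slides) of why this matters for convolution: a circular 1D convolution is equivariant to circular translation, so shifting the input simply shifts the output.

```python
# Toy illustration of equivariance: circular 1D convolution commutes with translation.
import numpy as np

def f(x, w=np.array([1.0, 2.0, 1.0])):
    # circular cross-correlation with a 3-tap kernel
    n = len(x)
    return np.array([sum(w[k] * x[(i + k - 1) % n] for k in range(3))
                     for i in range(n)])

def t(x, shift=3):
    return np.roll(x, shift)  # t[x]: translate the signal (circularly)

x = np.random.randn(10)
print(np.allclose(f(t(x)), t(f(x))))  # True: f[t[x]] = t[f[x]]
```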
• Convolutional layer (kernel size 3): 3 weights, 1 bias
• Fully-connected layer: D² weights, D biases
• A convolutional layer is a special case of a fully-connected network, with a sparse weight matrix and shared weights
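A minimal sketch of this special case (assuming kernel size 3, stride 1, zero padding, and input length D = 6, none of which are fixed by the slide): the equivalent D x D fully-connected weight matrix is sparse and reuses the same 3 weights in every row.

```python
# A 1D convolution written as an equivalent (banded, weight-sharing) fully-connected layer.
import numpy as np

D = 6
w = np.array([0.5, -1.0, 2.0])   # 3 shared weights
b = 0.1                          # 1 shared bias

# Convolutional view: slide the kernel over the zero-padded input.
def conv1d(x):
    xp = np.concatenate([[0.0], x, [0.0]])                  # zero padding
    return np.array([w @ xp[i:i + 3] + b for i in range(D)])

# Fully-connected view: build the equivalent banded weight matrix.
W = np.zeros((D, D))
for i in range(D):
    for k in range(3):
        j = i + k - 1
        if 0 <= j < D:
            W[i, j] = w[k]

x = np.random.randn(D)
print(np.allclose(conv1d(x), W @ x + b))  # True: same computation, far fewer parameters
```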
Question 1
• Kernel size?
• Stride?
• Dilation?
• Zero padding / valid?
Question 2
• Kernel size?
• Stride?
• Dilation?
• Zero padding / valid?
Question 3
• Kernel size?
• Stride?
• Dilation?
• Zero padding / valid?
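The questions above are left open here. As a general rule (not the answers), the sketch below shows how kernel size, stride, dilation, and padding together determine the output length of a 1D convolution; the formula matches the one documented for torch.nn.Conv1d.

```python
# Output length of a 1D convolution as a function of its hyperparameters.
def conv_output_length(D, kernel_size, stride=1, dilation=1, padding=0):
    return (D + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1

D = 12
print(conv_output_length(D, kernel_size=3))                         # "valid" (no padding): 10
print(conv_output_length(D, kernel_size=3, padding=1))              # zero padding keeps length: 12
print(conv_output_length(D, kernel_size=3, stride=2, padding=1))    # stride 2 halves it: 6
print(conv_output_length(D, kernel_size=3, dilation=2, padding=2))  # dilation widens the kernel: 12
```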
Convolutional networks
• Networks for images
• Invariance and equivariance
• 1D convolution
• Convolutional layers
• Channels
• Convolutional network for MNIST 1D
Channels
• The convolution operation averages together the inputs
• then passes the result through a ReLU function
• so it has to lose information
• Solution: apply several convolutions and stack the results in channels
• Sometimes also called feature maps
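A minimal sketch of this idea (using PyTorch, which may or may not be the course's framework): two kernels applied to the same input produce two channels / feature maps, so less information is lost than with a single convolution.

```python
# Two convolutions of the same input, stacked as two output channels (feature maps).
import torch
import torch.nn as nn

x = torch.randn(1, 1, 12)                        # (batch, input channels, length)
conv = nn.Conv1d(in_channels=1, out_channels=2,  # two kernels -> two output channels
                 kernel_size=3, padding=1)
h = torch.relu(conv(x))                          # each channel is its own feature map
print(h.shape)                                   # torch.Size([1, 2, 12])
```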
Two output channels, one input channel
Two input channels, one output channel
How many parameters?
• If there are C_i input channels and kernel size K, each output channel needs C_i · K weights and 1 bias
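A quick check of this count (a sketch; C_i, C_o, and K are my labels for the input channels, output channels, and kernel size): a Conv1d layer has C_i · K · C_o weights plus C_o biases, i.e. C_o · (C_i · K + 1) parameters in total.

```python
# Verify the parameter count of a 1D convolutional layer.
import torch.nn as nn

C_i, C_o, K = 3, 8, 5
conv = nn.Conv1d(C_i, C_o, kernel_size=K)
n_params = sum(p.numel() for p in conv.parameters())
print(n_params, C_o * (C_i * K + 1))  # 128 128
```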
Upsampling
• Duplicate
• Max-upsampling
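A minimal sketch (PyTorch; the exact operators used in the lecture are an assumption on my part) of the two upsampling schemes named above: duplication repeats each value, while max-upsampling (max-unpooling) puts each value back at the position remembered by the matching max-pooling layer and fills the rest with zeros.

```python
import torch
import torch.nn as nn

x = torch.tensor([[[1.0, 4.0, 2.0, 3.0]]])   # (batch, channels, length)

# Duplicate: nearest-neighbour upsampling repeats every element.
print(nn.Upsample(scale_factor=2, mode='nearest')(x))
# tensor([[[1., 1., 4., 4., 2., 2., 3., 3.]]])

# Max-upsampling: unpool using the indices remembered by max pooling.
pool = nn.MaxPool1d(kernel_size=2, return_indices=True)
pooled, idx = pool(x)                        # keeps [4., 3.] at positions [1, 3]
print(nn.MaxUnpool1d(kernel_size=2)(pooled, idx))
# tensor([[[0., 4., 0., 3.]]])
```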
Encoder-decoder
Semantic segmentation results
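A minimal sketch (my own toy layout, not the network behind these results) of an encoder-decoder for segmentation of 1D inputs: strided convolutions shrink the representation, then upsampling and further convolutions restore one class score per input position.

```python
import torch
import torch.nn as nn

num_classes = 3
net = nn.Sequential(
    # encoder: halve the length twice while adding channels
    nn.Conv1d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv1d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    # decoder: upsample back to the input length and map to class scores
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv1d(16, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv1d(8, num_classes, kernel_size=3, padding=1),
)

x = torch.randn(1, 1, 40)
print(net(x).shape)  # torch.Size([1, 3, 40]): one score per class per position
```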