Convolutional Neural Networks

Convolutional neural networks are commonly used for image recognition tasks. A basic CNN structure includes convolutional layers to extract features, pooling layers to reduce dimensions, and fully connected layers at the end for classification. Popular CNN models that have achieved state-of-the-art results on image datasets include LeNet, AlexNet, GoogLeNet, VGGNet, and ResNet. These models have progressively increased in depth and complexity over time.

Convolutional Neural Networks for Image Recognition

By Yunzhe Xue

• Reference: http://cs231n.github.io/convolutional-networks/
Dense neural network and convolutional neural network
A simple CNN structure

CONV: convolutional kernel layer
ReLU: activation function
POOL: dimension-reduction (pooling) layer
FC: fully connected layer
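
A minimal sketch of this CONV → ReLU → POOL → FC stack in PyTorch; the channel counts, kernel size, and the 32x32 input are illustrative assumptions, not values from the slides:

import torch
import torch.nn as nn

# One CONV -> ReLU -> POOL stage followed by an FC classifier.
simple_cnn = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),  # CONV
    nn.ReLU(),                                                            # ReLU
    nn.MaxPool2d(kernel_size=2),                                          # POOL: 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),                                          # FC: 10 class scores
)

x = torch.randn(1, 3, 32, 32)   # dummy batch of one 32x32 RGB image
print(simple_cnn(x).shape)      # torch.Size([1, 10])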
Convolutional kernel

[Animated figure: a convolutional kernel sliding across the input volume]
Convolutional kernel

Padding the input volume with zeros in such a way that the conv layer does not alter the spatial dimensions of the input.
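
The effect of padding can be checked with the standard output-size formula O = (W - K + 2P) / S + 1, for input width W, kernel size K, padding P, and stride S; with stride 1, choosing P = (K - 1) / 2 leaves the spatial dimensions unchanged. A quick worked check in Python:

# Spatial output size of a conv layer: O = (W - K + 2P) / S + 1
def conv_output_size(w, k, p, s=1):
    return (w - k + 2 * p) // s + 1

print(conv_output_size(32, 3, 0))  # 30: no padding shrinks a 32x32 input
print(conv_output_size(32, 3, 1))  # 32: P = (3-1)/2 = 1 preserves the size
print(conv_output_size(32, 5, 2))  # 32: P = (5-1)/2 = 2 preserves the size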
Rectified linear unit (ReLU)
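
ReLU clamps negative activations to zero, f(x) = max(0, x), applied elementwise. A one-line NumPy sketch:

import numpy as np

def relu(x):
    return np.maximum(0, x)  # elementwise max(0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # [0.  0.  0.  1.5]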
Pooling layer
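
For example, 2x2 max pooling with stride 2 keeps the largest value in each 2x2 block, halving both spatial dimensions. A small PyTorch check:

import torch
import torch.nn.functional as F

x = torch.tensor([[ 1.,  2.,  5.,  6.],
                  [ 3.,  4.,  7.,  8.],
                  [ 9., 10., 13., 14.],
                  [11., 12., 15., 16.]]).reshape(1, 1, 4, 4)

# Each output cell is the max of one 2x2 block of the input.
print(F.max_pool2d(x, kernel_size=2))
# tensor([[[[ 4.,  8.],
#           [12., 16.]]]])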
MNIST dataset

The MNIST database of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size 28x28 image.
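
One common way to load the dataset (a sketch assuming torchvision is installed; the files are downloaded to ./data on first use):

from torchvision import datasets, transforms

train = datasets.MNIST("./data", train=True, download=True, transform=transforms.ToTensor())
test = datasets.MNIST("./data", train=False, download=True, transform=transforms.ToTensor())

print(len(train), len(test))   # 60000 10000
img, label = train[0]
print(img.shape, label)        # torch.Size([1, 28, 28]) 5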
LeNet-5 for MNIST
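
A sketch of the LeNet-5 layer stack in PyTorch, sized for 28x28 MNIST inputs. The original network took 32x32 inputs and used tanh activations with average pooling; the padding, ReLU, and max pooling below are the common modern substitutions:

import torch.nn as nn

lenet5 = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.ReLU(),   # 1x28x28 -> 6x28x28
    nn.MaxPool2d(2),                                        # -> 6x14x14
    nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(),             # -> 16x10x10
    nn.MaxPool2d(2),                                        # -> 16x5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
    nn.Linear(120, 84), nn.ReLU(),
    nn.Linear(84, 10),                                      # scores for the 10 digits
)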
CIFAR-10 dataset and state of the art

The CIFAR-10 dataset consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images.
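
Loading it mirrors the MNIST snippet above (again a sketch assuming torchvision); note the 3-channel 32x32 color images:

from torchvision import datasets, transforms

train = datasets.CIFAR10("./data", train=True, download=True, transform=transforms.ToTensor())
test = datasets.CIFAR10("./data", train=False, download=True, transform=transforms.ToTensor())

print(len(train), len(test))   # 50000 10000
img, label = train[0]
print(img.shape)               # torch.Size([3, 32, 32])
print(train.classes[label])    # one of the 10 class names, e.g. 'frog'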
ImageNet

• The ImageNet project is a large visual database designed for use in visual object recognition software research. As of 2016, over ten million URLs of images have been hand-annotated by ImageNet to indicate what objects are pictured; in at least one million of the images, bounding boxes are also provided. [1] The database of annotations of third-party image URLs is freely available directly from ImageNet; however, the actual images are not owned by ImageNet. [2] Since 2010, the ImageNet project has run an annual software contest, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), in which software programs compete to correctly classify and detect objects and scenes.
Case studies

• LeNet. The first successful applications of convolutional networks were developed by Yann LeCun in the 1990s. Of these, the best known is the LeNet architecture that was used to read zip codes, digits, etc.

• AlexNet. The first work that popularized convolutional networks in computer vision was AlexNet, developed by Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. AlexNet was submitted to the ImageNet ILSVRC challenge in 2012 and significantly outperformed the runner-up (top-5 error of 16% compared to the runner-up's 26%). The network had a very similar architecture to LeNet, but was deeper and bigger, and featured convolutional layers stacked on top of each other (previously it was common to have only a single CONV layer always immediately followed by a POOL layer).
Case studies
• GoogLeNet. The ILSVRC 2014 winner was a convolutional network from Szegedy et al. from Google. Its main contribution was the development of an Inception Module that dramatically reduced the number of parameters in the network (4M, compared to AlexNet with 60M). Additionally, this paper uses average pooling instead of fully connected layers at the top of the ConvNet, eliminating a large number of parameters that do not seem to matter much. There are also several follow-up versions of GoogLeNet, most recently Inception-v4.
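
A simplified Inception module for illustration: parallel 1x1, 3x3, 5x5, and pooling branches whose outputs are concatenated along the channel axis, with 1x1 "bottleneck" convolutions keeping the parameter count down. The branch widths here are illustrative assumptions, not the paper's values:

import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)                     # 1x1 branch
        self.b2 = nn.Sequential(nn.Conv2d(in_ch, 16, 1), nn.ReLU(),
                                nn.Conv2d(16, 24, 3, padding=1))          # 1x1 bottleneck, then 3x3
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 4, 1), nn.ReLU(),
                                nn.Conv2d(4, 8, 5, padding=2))            # 1x1 bottleneck, then 5x5
        self.b4 = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 8, 1))                   # pool, then 1x1

    def forward(self, x):
        # All branches preserve the spatial size, so they concatenate cleanly.
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

x = torch.randn(1, 32, 28, 28)
print(InceptionModule(32)(x).shape)  # torch.Size([1, 56, 28, 28]): 16+24+8+8 channels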
Case studies
• VGGNet. The runner-up in ILSVRC 2014 was the network from Karen Simonyan and Andrew Zisserman that became known as VGGNet. Its main contribution was showing that the depth of the network is a critical component of good performance. Their final best network contains 16 CONV/FC layers and, appealingly, features an extremely homogeneous architecture that performs only 3x3 convolutions and 2x2 pooling from beginning to end. Their pretrained model is available for plug-and-play use in Caffe. A downside of VGGNet is that it is more expensive to evaluate and uses much more memory and many more parameters (140M). Most of these parameters are in the first fully connected layer, and it has since been found that these FC layers can be removed with no performance downgrade, significantly reducing the number of necessary parameters.
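
The parameter counts quoted in these case studies can be sanity-checked against torchvision's reference implementations (a sketch; exact counts vary slightly between implementations):

from torchvision import models

def n_params(model):
    return sum(p.numel() for p in model.parameters())

# Expect roughly 61M for AlexNet and 138M for VGG-16, close to the
# 60M and 140M figures quoted above.
print(f"AlexNet: {n_params(models.alexnet()) / 1e6:.1f}M")
print(f"VGG-16:  {n_params(models.vgg16()) / 1e6:.1f}M")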
Case studies
• ResNet. The Residual Network developed by Kaiming He et al. was the winner of ILSVRC 2015. It features special skip connections and heavy use of batch normalization. The architecture is also missing fully connected layers at the end of the network. The reader is also referred to Kaiming's presentation (video, slides), and some recent experiments that reproduce these networks in Torch. ResNets are currently by far the state-of-the-art convolutional neural network models and are the default choice for using ConvNets in practice (as of May 10, 2016). In particular, also see more recent developments that tweak the original architecture, described in Kaiming He et al., "Identity Mappings in Deep Residual Networks" (published March 2016).
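
A sketch of the basic residual building block such networks stack: two 3x3 convolutions with batch normalization, whose output is added back to the block's input through the skip connection. This identity-shortcut version assumes the input and output shapes match:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(ch)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(ch)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)   # skip connection: add the input back in

x = torch.randn(1, 64, 8, 8)
print(ResidualBlock(64)(x).shape)    # torch.Size([1, 64, 8, 8])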
[Figure: side-by-side comparison of the VGG-16, GoogLeNet, and ResNet architectures]
