Vision Transformers (ViT) in Image Recognition – 2022 Guide

Gaudenz Boesch (https://viso.ai/author/gaudenz-boesch/)
In 2022, the Vision Transformer (ViT) emerged as a competitive alternative to convolutional neural networks (CNNs) (https://viso.ai/deep-learning/ann-and-cnn-analyzing-differences-and-similarities/), which are currently state-of-the-art in computer vision and therefore widely used in different image recognition tasks. ViT models outperform the current state-of-the-art CNNs by almost a factor of four in terms of computational efficiency and accuracy.

Transformer models have become the de-facto standard in Natural Language Processing (NLP). In computer vision research, there has recently been a surge of interest in Vision Transformers (ViTs) and Multilayer Perceptrons (MLPs) (https://viso.ai/deep-learning/deep-neural-network-three-popular-types/).

This article will cover the following topics:

What is a Vision Transformer (ViT)?

Using ViT models in Image Recognition

How do Vision Transformers work?

Use Cases and applications of Vision Transformers


Vision Transformer (ViT) in Image Recognition

While the Transformer architecture has become the standard for tasks involving Natural Language Processing (NLP) (https://viso.ai/deep-learning/natural-language-processing/), its use cases in Computer Vision (CV) (https://viso.ai/computer-vision/what-is-computer-vision/) remain few. In computer vision, attention is either used in conjunction with convolutional networks (CNNs) or used to substitute certain aspects of convolutional networks while keeping their overall structure intact. Popular image recognition algorithms include ResNet (https://viso.ai/deep-learning/resnet-residual-neural-network/), VGG (https://viso.ai/deep-learning/vgg-very-deep-convolutional-networks/), YOLOv3 (https://viso.ai/deep-learning/yolov3-overview/), and YOLOv7 (https://viso.ai/deep-learning/yolov7-guide/).

The concept of the widely popular Convolutional Neural Network (CNN) (https://viso.ai/deep-learning/ann-and-cnn-analyzing-differences-and-similarities/)

However, this dependency on CNN is not mandatory, and a pure transformer applied directly to
sequences of image patches can work exceptionally well on image classification
(https://viso.ai/computer-vision/image-classification/) tasks.

Recently, Vision Transformers (ViT) have achieved highly competitive performance in benchmarks for several computer vision applications, such as image classification, object detection (https://viso.ai/deep-learning/object-detection/), and semantic image segmentation (https://viso.ai/deep-learning/image-segmentation-using-deep-learning/).


What is a Vision Transformer (ViT)?

The Vision Transformer (ViT) model was introduced in a research paper published as a conference paper at ICLR 2021, titled “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale”. It was developed and published by Neil Houlsby, Alexey Dosovitskiy, and 10 more authors of the Google Research Brain Team.

The fine-tuning code and pre-trained ViT models are available on the GitHub of Google Research; you can find them here (https://github.com/google-research/vision_transformer). The ViT models were pre-trained on the ImageNet and ImageNet-21k datasets.

Are Transformers a Deep Learning method?

A transformer in machine learning (https://viso.ai/deep-learning/deep-learning-vs-machine-learning/) is a deep learning model that uses the mechanism of attention, differentially weighting the significance of each part of the input data. Transformers in machine learning are composed of multiple self-attention layers. They are primarily used in the AI subfields of natural language processing (NLP) and computer vision (https://viso.ai/computer-vision/what-is-computer-vision/) (CV).

Transformers in machine learning show strong promise (https://arxiv.org/abs/2105.07581) as a generic learning method that can be applied to various data modalities, including recent breakthroughs in computer vision that achieve state-of-the-art accuracy with better parameter efficiency.

Difference between CNN and ViT (ViT vs. CNN)

Vision Transformer (ViT) achieves remarkable results compared to convolutional neural networks (CNNs) while requiring fewer computational resources for pre-training. In comparison to CNNs, ViT shows a generally weaker inductive bias, resulting in an increased reliance on model regularization or data augmentation (https://viso.ai/computer-vision/image-data-augmentation-for-computer-vision/) (AugReg) when training on smaller datasets.

The ViT is a visual model based on the architecture of a transformer originally designed for text-based tasks. The ViT model represents an input image as a series of image patches, analogous to the series of word embeddings used when applying transformers to text, and directly predicts class labels for the image. ViT exhibits extraordinary performance when trained on enough data, beating the performance of a comparable state-of-the-art CNN while using four times fewer computational resources.

These transformers have high success rates when it comes to NLP models and are now also applied to images for image recognition tasks. CNNs use pixel arrays, whereas ViT splits the images into visual tokens. The visual transformer divides an image into fixed-size patches, embeds each of them, and includes positional embedding as an input to the transformer encoder. Moreover, ViT models outperform CNNs (https://arxiv.org/abs/2105.07581) by almost four times when it comes to computational efficiency and accuracy.

The self-attention layer in ViT makes it possible to embed information globally across the overall image. The model also learns from training data to encode the relative location of the image patches, so it can reconstruct the structure of the image.
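To make the patch-embedding step concrete, here is a minimal sketch in PyTorch (an assumption; the official ViT code is written in JAX/Flax, and names such as PatchEmbed and the default sizes, illustrative of ViT-B/16, are not from the original codebase). A single strided convolution is a common way to implement the patch split and linear projection in one step:

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split an image into fixed-size patches and linearly embed each one.

    A Conv2d with kernel_size == stride == patch_size is equivalent to
    cutting the image into non-overlapping patches, flattening them, and
    applying a shared linear projection to each patch.
    """
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                      # x: (B, 3, 224, 224)
        x = self.proj(x)                       # (B, 768, 14, 14)
        return x.flatten(2).transpose(1, 2)    # (B, 196, 768): one token per patch

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])
```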

The transformer encoder includes:

Multi-Head Self-Attention Layer (MSA): This layer concatenates the outputs of all attention heads and linearly projects them to the right dimensions. The many attention heads help the model learn local and global dependencies in an image.

Multi-Layer Perceptron (MLP) Layer: This layer contains a two-layer feed-forward network with Gaussian Error Linear Unit (GELU) activation.

Layer Norm (LN): This is added prior to each block, as it does not introduce any new dependencies between the training images. It thereby helps improve the training time and overall performance.

Moreover, residual connections are included after each block, as they allow the components to flow through the network directly without passing through non-linear activations.

In the case of image classification, the MLP layer implements the classification head: with one hidden layer at pre-training time and a single linear layer at fine-tuning time. A minimal sketch of one such encoder block follows.
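Putting these components together, one encoder block might look like the following sketch, assuming the pre-norm layout described above and using PyTorch's nn.MultiheadAttention as a stand-in for the MSA layer (all names and sizes are illustrative of ViT-Base, not the original implementation):

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One ViT encoder block: LN -> MSA -> residual, then LN -> MLP -> residual."""
    def __init__(self, dim=768, num_heads=12, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(              # two-layer MLP with GELU activation
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x):                      # x: (B, N, dim)
        h = self.norm1(x)                      # LayerNorm before the block
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual around MSA
        return x + self.mlp(self.norm2(x))                 # residual around MLP

x = torch.randn(1, 197, 768)                   # 196 patch tokens + 1 class token
print(EncoderBlock()(x).shape)                 # torch.Size([1, 197, 768])
```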


Raw images (left) with attention maps of the ViT-S/16 model (right). – Source (https://arxiv.org/abs/2106.01548)

What are attention maps of ViT?

Attention, more specifically self-attention, is one of the essential building blocks of machine learning transformers. It is a computational primitive used to quantify pairwise entity interactions that help a network learn the hierarchies and alignments present in input data. Attention has proven to be a key element for vision networks to achieve higher robustness.
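To illustrate the computation behind these maps, the sketch below implements single-head scaled dot-product self-attention and returns the attention matrix itself, which is what attention-map visualizations display (a simplification; real ViTs use multiple heads and learned projection layers):

```python
import torch

def self_attention_with_map(x, w_q, w_k, w_v):
    """Single-head self-attention; returns outputs and the attention map.

    x: (N, D) sequence of N token embeddings.
    The (N, N) attention matrix quantifies pairwise token interactions;
    the row for the class token, reshaped to the patch grid, is what
    attention-map visualizations typically show.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / (k.shape[-1] ** 0.5)    # scaled pairwise similarities
    attn = scores.softmax(dim=-1)              # (N, N) attention map
    return attn @ v, attn

N, D = 197, 64                                 # 196 patch tokens + 1 class token
x = torch.randn(N, D)
w = [torch.randn(D, D) / D ** 0.5 for _ in range(3)]
out, attn = self_attention_with_map(x, *w)
print(out.shape, attn.shape)                   # (197, 64) and (197, 197)
```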


Visualization of attention maps of ViT on images from ImageNet-A. – Source (https://arxiv.org/abs/2105.07581)

Vision Transformer ViT Architecture

The overall architecture of the vision transformer model is given in a step-by-step manner as follows (a minimal end-to-end sketch follows the list):

1. Split an image into patches (fixed sizes)

2. Flatten the image patches

3. Create lower-dimensional linear embeddings from these flattened image patches

4. Include positional embeddings

5. Feed the sequence as an input to a state-of-the-art transformer encoder

6. Pre-train the ViT model with image labels, fully supervised on a big dataset

7. Fine-tune on the downstream dataset for image classification
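Wiring the earlier sketches together, the following minimal classifier covers steps 1 through 5 (illustrative only; it uses PyTorch's built-in transformer encoder rather than the original JAX/Flax implementation, and the training of steps 6 and 7 is omitted):

```python
import torch
import torch.nn as nn

class MiniViT(nn.Module):
    """Minimal ViT: patchify -> embed -> [CLS] + positions -> encoder -> head."""
    def __init__(self, img_size=224, patch_size=16, dim=768, depth=12,
                 num_heads=12, num_classes=1000):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        self.patch_embed = nn.Conv2d(3, dim, patch_size, patch_size)        # steps 1-3
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim)) # step 4
        self.encoder = nn.TransformerEncoder(                               # step 5
            nn.TransformerEncoderLayer(dim, num_heads, dim * 4,
                                       activation="gelu", batch_first=True,
                                       norm_first=True),
            num_layers=depth)
        self.head = nn.Linear(dim, num_classes)          # classification head

    def forward(self, x):
        x = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        cls = self.cls_token.expand(x.shape[0], -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed      # prepend [CLS], add positions
        x = self.encoder(x)
        return self.head(x[:, 0])                            # classify from the [CLS] token

logits = MiniViT()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```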

Vision Transformer ViT Architecture – Source (https://github.com/google-research/vision_transformer)

While the ViT full-transformer architecture is a promising option for vision processing tasks, the
performance of ViTs is still inferior to that of similar-sized CNN alternatives (such as ResNet)
when trained from scratch on a mid-sized dataset such as ImageNet.


2021 performance benchmark comparison of Vision Transformers (ViT) with ResNet (https://viso.ai/deep-learning/resnet-residual-neural-network/) and MobileNet when trained from scratch on ImageNet. – Source (https://arxiv.org/abs/2101.11986)

How does a Vision Transformer (ViT) work?

The performance of a vision transformer model depends on decisions such as the optimizer, the network depth, and dataset-specific hyperparameters. Compared to ViT, CNNs are easier to optimize.

One departure from the pure transformer is to marry a transformer to a CNN front end. The usual ViT stem leverages a 16x16 convolution with a stride of 16. In comparison, a 3x3 convolution with stride 2 increases stability and elevates precision.

The CNN turns basic pixels into a feature map. Later, the feature map is translated by a tokenizer into a sequence of tokens that are then inputted into the transformer. The transformer then applies the attention technique to create a sequence of output tokens. Eventually, a projector reconnects the output tokens to the feature map, which allows the model to retain access to potentially crucial pixel-level details. This lowers the number of tokens that need to be processed, lowering costs significantly.
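As a sketch of this hybrid idea, the standard patchify stem can be swapped for a stack of stride-2 3x3 convolutions that yields the same number of tokens (channel widths here are illustrative, loosely following the "early convolutions" recipe, and are not from any particular published model):

```python
import torch
import torch.nn as nn

# Standard ViT stem: a single 16x16 convolution with stride 16.
patchify_stem = nn.Conv2d(3, 768, kernel_size=16, stride=16)

# Hybrid stem: 3x3 convolutions with stride 2, reported to stabilize training;
# four stride-2 convs also downsample by 16x overall, matching the patchify stem.
conv_stem = nn.Sequential(
    nn.Conv2d(3, 48, 3, stride=2, padding=1), nn.BatchNorm2d(48), nn.ReLU(),
    nn.Conv2d(48, 96, 3, stride=2, padding=1), nn.BatchNorm2d(96), nn.ReLU(),
    nn.Conv2d(96, 192, 3, stride=2, padding=1), nn.BatchNorm2d(192), nn.ReLU(),
    nn.Conv2d(192, 384, 3, stride=2, padding=1), nn.BatchNorm2d(384), nn.ReLU(),
    nn.Conv2d(384, 768, 1),                    # 1x1 conv up to the token dimension
)

x = torch.randn(1, 3, 224, 224)
for stem in (patchify_stem, conv_stem):
    tokens = stem(x).flatten(2).transpose(1, 2)
    print(tokens.shape)                        # both: torch.Size([1, 196, 768])
```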

Notably, if the ViT model is trained on huge datasets of over 14M images, it can outperform CNNs. If not, the best option is to stick to ResNet (https://viso.ai/deep-learning/resnet-residual-neural-network/) or EfficientNet. The vision transformer model is pre-trained on a huge dataset even before the process of fine-tuning. The only change for fine-tuning is to discard the pre-training MLP head and attach a new D x K linear layer, where K is the number of classes of the small downstream dataset.
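As a concrete sketch of this head swap, here is how it might look with torchvision's pre-trained ViT-B/16 (assuming torchvision 0.13 or later; K = 10 is an illustrative class count, and the heads.head attribute layout is specific to torchvision's implementation, not the original code):

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

K = 10  # number of classes in the (smaller) downstream dataset

# Load a ViT-B/16 pre-trained on a large dataset (ImageNet-1k weights here).
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)

# Discard the pre-training head and attach a fresh D x K linear layer.
D = model.heads.head.in_features               # D = 768 for ViT-B/16
model.heads.head = nn.Linear(D, K)

# Optionally freeze the backbone and train only the new head at first.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("heads")

logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 10])
```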

To fine-tune at higher resolutions, 2D interpolation of the pre-trained position embeddings is performed. This is possible because the position embeddings are trainable parameters laid out on the 2D patch grid.
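A minimal sketch of this interpolation, assuming a position embedding of shape (1, 1 + N, D) with the class-token embedding first (grid sizes are illustrative for 224px pre-training and 384px fine-tuning with 16px patches):

```python
import torch
import torch.nn.functional as F

def interpolate_pos_embed(pos_embed, old_grid=14, new_grid=24):
    """Resize pre-trained ViT position embeddings to a new patch grid.

    pos_embed: (1, 1 + old_grid**2, D), class-token embedding first.
    Returns:   (1, 1 + new_grid**2, D).
    """
    cls_tok, patch_pos = pos_embed[:, :1], pos_embed[:, 1:]
    D = patch_pos.shape[-1]
    # Lay the patch embeddings back out on their 2D grid, resize, and flatten.
    patch_pos = patch_pos.reshape(1, old_grid, old_grid, D).permute(0, 3, 1, 2)
    patch_pos = F.interpolate(patch_pos, size=(new_grid, new_grid),
                              mode="bicubic", align_corners=False)
    patch_pos = patch_pos.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, D)
    return torch.cat([cls_tok, patch_pos], dim=1)

pe = torch.randn(1, 1 + 14 * 14, 768)          # pre-trained at 224px, 16px patches
print(interpolate_pos_embed(pe).shape)         # torch.Size([1, 577, 768]) for 384px
```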

Real-World Vision Transformer (ViT) Use Cases and Applications

Vision transformers have extensive applications in popular image recognition (https://viso.ai/computer-vision/image-recognition/) tasks such as object detection (https://viso.ai/deep-learning/object-detection/), segmentation (https://viso.ai/deep-learning/image-segmentation-using-deep-learning/), image classification (https://viso.ai/computer-vision/image-classification/), and action recognition. Moreover, ViTs are applied in generative modeling and multi-modal tasks, including visual grounding, visual question answering, and visual reasoning.

Video forecasting and activity recognition are parts of video processing that use ViT. Moreover, image enhancement, colorization, and image super-resolution also use ViT models. Last but not least, ViTs have numerous applications in 3D analysis, such as segmentation and point cloud classification.

Conclusion

The vision transformer model uses multi-head self-attention in Computer Vision without requiring image-specific biases. The model splits an image into a series of patches with positional embeddings, which are processed by the transformer encoder. It does so to understand the local and global features that the image possesses. Last but not least, ViT achieves higher accuracy on large datasets with reduced training time.
What’s next
Read more about related topics and other state-of-the-art methods in machine learning, image
processing, and recognition.

Optical Character Recognition (OCR) (https://viso.ai/computer-vision/optical-character-recognition-ocr/)

Supervised vs Unsupervised Learning for Computer Vision (https://viso.ai/deep-learning/supervised-vs-unsupervised-learning/)

Object Detection Today: The Definitive Guide (https://viso.ai/deep-learning/object-detection/)

YOLOR – You Only Learn One Representation (https://viso.ai/deep-learning/yolor/)

Introduction to MLOps – Methods To Deliver Machine Learning (https://viso.ai/deep-learning/mlops-deliver-machine-learning/)
