Traffic Signs Recognition-MiniProject Report
By
1) SAKSHI SINGH(62)
2) KOMAL LOGADE(42)
3) KARTIK WAGHELA(76)
DEPARTMENT OF COMPUTER ENGINEERING
SARASWATI COLLEGE OF ENGINEERING,
SECTOR-5, KHARGHAR, NAVI MUMBAI-410210
UNIVERSITY OF MUMBAI
This is to certify that the requirements for the project report entitled “Traffic
Sign Recognition” have been successfully completed by the following students:
This paper deals with object recognition in outdoor environments. In this type of environments,
lighting conditions cannot be controlled and predicted, objects can be partially occluded, and
their position and orientation is not known a priori.
The chosen type of objects is traffic or road signs, due to their usefulness for sign
maintenance, inventory in highways and cities, Driver Support Systems and Intelligent
Autonomous Vehicles.
A genetic algorithm is used for the detection step, making localisation invariant to
changes in position, scale, rotation, weather conditions, partial occlusion, and the presence of
other objects of the same colour.
A neural network achieves the classification. The global system not only recognises the
traffic sign but also provides information about its condition or state.
Keywords:
Genetic algorithms,
Neural networks,
Traffic sign recognition,
Driver support systems,
Intelligent vehicles,
Intelligent transportation systems.
TABLE OF CONTENT
CHAPTER NO. TITLE PAGE NO.
1. INTRODUCTION 1
2. REQUIREMENTS 2
3. DETAILED DESIGN 3
4. SCREENSHOTS OF PROJECT 7
5. CONCLUSION 8
CHAPTER 1
INTRODUCTION:
Traffic sign recognition is a multi-category classification problem with unbalanced class
frequencies. It is a challenging real-world computer vision problem of high practical relevance,
which has been a research topic for several decades. Many studies have been published on this
subject and multiple systems, which often restrict themselves to a subset of relevant signs, are
already commercially available in new high- and mid-range vehicles. Nevertheless, there has
been little systematic unbiased comparison of approaches and comprehensive benchmark
datasets are not publicly available.
There are several different types of traffic signs, such as speed limits, no entry, traffic signals,
turn left or right, children crossing, no passing of heavy vehicles, etc. Traffic sign classification is
the process of identifying which class a traffic sign belongs to. In this Python project,
we build a deep neural network model that can classify the traffic signs present in an image
into different categories. With this model, we are able to read and understand traffic signs,
which is a very important task for all autonomous vehicles.
We chose this project due to the wide range of applications that a system with such
capabilities enables. It is an attempt to build a self-learning system that can understand and
recognize traffic signs on its own.
The system has to be able to detect traffic signs independently of their appearance in the image.
Because of that, it has to be invariant to:
1. Perspective Distortion
2. Lighting Changes
3. Partial Occlusions
4. Shadows
CHAPTER 2
HARDWARE SPECIFICATIONS:
1. RAM: 512 MB
2. Hard Drive: 40 GB
3. Processor: Intel Core 2
4. Camera Module (Webcam)
5. Projector
6. Colour Markers
SOFTWARE REQUIREMENTS
1. Python (3.7.4 used)
2. IDE (Jupyter used)
CHAPTER 3
DESIGN DETAILS:
The frameworks required are OpenCV (for image processing and contour extraction) and a
deep-learning framework for building and training the CNN classifier.
ALGORITHM:
The algorithms used in recognition systems such as this one can be divided into three
categories: Image Pre-processing, Feature Extraction, and Classification. They are normally
used in sequence: image pre-processing helps make feature extraction a smoother process,
while feature extraction is necessary for correct classification.
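The three-stage sequence above can be sketched as follows. The stage bodies here are hypothetical stand-ins (the report only names the stages), and the toy brightness rule in `classify` is purely illustrative:

```python
# Hypothetical stand-ins for the three stages described above; a real
# system would operate on image arrays rather than flat pixel lists.
def preprocess(pixels):
    # Normalise pixel intensities to the range [0, 1]
    return [p / 255.0 for p in pixels]

def extract_features(pixels):
    # Here the "features" are simply the normalised pixels themselves
    return pixels

def classify(features):
    # Toy rule for illustration: bright inputs are class 1, dark are class 0
    return 1 if sum(features) / len(features) > 0.5 else 0

def recognize(pixels):
    # The three stages run in sequence, as described in the text
    return classify(extract_features(preprocess(pixels)))

print(recognize([255, 240, 250, 230]))  # bright input -> 1
```

The point is only the composition: each stage's output is the next stage's input.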
Pre-processing:
Pre-processing is the entry stage of the recognition pipeline and is very important in deciding
the recognition rate. It normalizes the strokes and removes variations that can reduce
accuracy, handling distortions such as irregular text size, points missed during pen movement,
jitter, left-right bend, and uneven spacing.
Segmentation:
Segmentation converts an input image containing many characters into individual characters.
The techniques used are word, line, and character segmentation. It is generally performed by
separating single characters from the word image. The content is processed in a tree-like
manner: first, a row histogram is used to segment the lines; then, at each level, individual
characters are retrieved using histogram-based techniques.
Feature Extraction:
The aim of feature extraction is to extract the patterns that are most important for
classification. Feature extraction techniques such as Principal Component Analysis (PCA),
Scale-Invariant Feature Transform (SIFT), Linear Discriminant Analysis (LDA), histograms,
Chain Code (CC), zoning, and gradient-based features can be applied to extract the features
of individual characters.
All of these features are used to train the system. Each segmented image is taken at a
dimension of 28 * 28 pixels. This can be represented as a 2-D array of numbers; flattening the
array yields a vector of 28 * 28 = 784 numbers. The image is thus reduced to a compact
784-element vector, and a batch of such vectors forms an n-dimensional tensor.
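The 28 * 28 flattening described above can be shown in a few lines. This is a sketch with a placeholder all-zero image, not the project's actual data:

```python
import numpy as np

# Placeholder: a segmented image crop resized to 28 x 28 grayscale pixels
image = np.zeros((28, 28), dtype=np.uint8)

# Flattening turns the 2-D array into a 784-element feature vector
vector = image.reshape(-1)
print(vector.shape)  # (784,)

# Stacking n such vectors gives an (n, 784) tensor for training
batch = np.stack([vector, vector])
print(batch.shape)  # (2, 784)
```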
Classification:
Decision making is done in the classification phase. The extracted features are used to
recognize the characters. Different classifiers such as SVMs and neural networks are used.
The classifier compares the input feature vector with stored patterns and finds the
best-matching class for the input; softmax regression is used for this. Softmax regression
assigns a probability to each possible result, which makes classification straightforward: it
first sums up the evidence for each class and then converts those sums into probabilities.
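The evidence-summing and probability-conversion step is the standard softmax function; a minimal sketch (not the project's code) is:

```python
import math

def softmax(evidence):
    """Convert raw per-class evidence scores into probabilities summing to 1."""
    # Subtract the max before exponentiating, for numerical stability
    m = max(evidence)
    exps = [math.exp(e - m) for e in evidence]
    total = sum(exps)
    return [e / total for e in exps]

# Three classes with raw evidence 2.0, 1.0 and 0.1
probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])  # [0.659, 0.242, 0.099]
```

The class with the largest evidence receives the largest probability, so classification reduces to taking the argmax of the output.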
WORKING:
Traffic sign detection is usually based on the shape and color attributes of traffic signs, and
traffic sign recognition is often used with classifiers, such as convolutional neural networks
(CNNs) and SVM with discriminative features. It is not difficult for human beings to
distinguish traffic signs from a background, so for a computer detection system, color
information is also an important feature.
Traffic signs are designed such that they appear unique and easily identifiable to the
human eye. Traffic signs in the United States of America use 3 main colors: Red,
White, and Yellow. Other colors like orange and blue are also used. In our approach we
concentrate on Red, White, and Yellow traffic signs. Since the color of a traffic sign is
unique against its background, we can use the color information to narrow down our areas of
interest (parts potentially containing the traffic sign). Since RGB colored images are
susceptible to variations in lighting, we use HSV (Hue, Saturation, Value)
images. Once we have the HSV image, our next goal is to define our ranges of interest
(i.e. the ranges of Yellow, Red, and White) so that we can segment the HSV image based on
these 3 colors. The next step is to use these color ranges to create binary masks for
each of the 3 colors. For example, the red binary mask will have 0 assigned to all the
regions which are not in the red range and 1 assigned to all regions which are in the red
range. We know that traffic signs usually occur in closed shapes such as
rectangles, triangles, and diamonds. We can use this property to extract closed shapes
from each of the 3 binary masks. This can be done using 'Topological Structural
Analysis of Digitized Binary Images by Border Following' [5]. We used the OpenCV
implementation of this algorithm.
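As an illustration of the color-masking idea above, the sketch below builds a red binary mask for a tiny RGB image. It uses the standard library's `colorsys` per pixel rather than OpenCV's vectorised `cv2.inRange`, and the hue/saturation/value thresholds are assumed values, not the project's tuned ranges:

```python
import colorsys
import numpy as np

def red_mask(rgb_image):
    """Binary mask: 1 where a pixel's hue falls in an assumed red range."""
    h, w, _ = rgb_image.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            r, g, b = rgb_image[y, x] / 255.0
            hue, sat, val = colorsys.rgb_to_hsv(r, g, b)
            # Red hue wraps around 0; require some saturation and brightness
            # (thresholds are illustrative assumptions)
            if (hue < 0.05 or hue > 0.95) and sat > 0.4 and val > 0.2:
                mask[y, x] = 1
    return mask

# Tiny synthetic image: one red pixel, one white, one yellow
img = np.array([[[200, 20, 20], [255, 255, 255], [230, 220, 30]]],
               dtype=np.uint8)
print(red_mask(img))  # [[1 0 0]]
```

In the real pipeline the resulting mask would then be passed to OpenCV's contour extraction to find closed shapes.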
From the extracted areas of interest in the previous step, we want to determine whether each
one is a sign and, if it is, what type of sign it actually is. For
this purpose, we can train a convolutional neural network. The data used to train and
test the CNN was obtained from http://cvrr.ucsd.edu/LISA/lisa-traffic-sign-dataset.html.
It had about 6000 frames and 49 different types of traffic signs. For each frame, the
coordinate positions of the traffic sign in the image were given. From these positions the
traffic signs were cropped out for use in training the CNN. A CNN is basically inspired
by the connections between the neurons in the visual cortex of animals [7]. Since traffic
signs contain unique shapes such as arrows, words, and circles, it is useful
to convert the traffic sign into a more useful form by applying a Laplacian operation to it.
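The Laplacian step above can be sketched with a small convolution. A real implementation would typically call OpenCV's `cv2.Laplacian`; the pure-NumPy version below is an illustrative sketch of what the operator computes, not the project's actual code:

```python
import numpy as np

def laplacian(img):
    """Apply the 4-neighbour Laplacian kernel to emphasise edges."""
    kernel = np.array([[0,  1, 0],
                       [1, -4, 1],
                       [0,  1, 0]])
    out = np.zeros(img.shape, dtype=int)
    # Skip the 1-pixel border so the kernel always fits inside the image
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = int(np.sum(kernel * img[y - 1:y + 2, x - 1:x + 2]))
    return out

# A vertical step edge between columns 2 and 3 of a 5x5 "sign"
sign = np.zeros((5, 5), dtype=int)
sign[:, 3:] = 10
edges = laplacian(sign)
print(edges[2, 2], edges[2, 1])  # 10 0 -> response only at the edge
```

Flat regions of the sign give zero response, so only the internal shapes (arrows, words, circles) survive, which is why this form is more useful for the CNN.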
CHAPTER 4
SCREENSHOT OF PROJECT:
CHAPTER 5
CONCLUSION:
The algorithm used here for traffic signs can be generalized to deal with other kinds of
objects. The known difficulties of object recognition in outdoor environments have been
considered, so the system is robust to lighting changes, occlusions, and object deformation,
making it useful for Driver Support Systems.
Given this knowledge of the sign status, it is believed that the system is also useful for other
applications such as maintenance and inventories of traffic signs on highways and in cities.
FUTURE SCOPE:
Future improvements can be made for extracting signs from test images by using advanced
segmentation methods.
REFERENCES:
[1] https://bartlab.org/Dr.%20Jackrit's%20Papers/ney/3.KRS036_Final_Submission.pdf
[2] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.695.3606&rep=rep1&type=pdf
[3] http://cvrr.ucsd.edu/LISA/lisa-traffic-sign-dataset.html
[4] http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf
[5] Suzuki, S. and Abe, K., Topological Structural Analysis of Digitized Binary Images by Border Following. CVGIP 30(1), pp. 32-46 (1985)
[6] http://docs.opencv.org/2.4/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html
[7] https://en.wikipedia.org/wiki/Convolutional_neural_network