INTERNATIONAL JOURNAL OF SCIENTIFIC & TECHNOLOGY RESEARCH VOLUME 8, ISSUE 10, OCTOBER 2019 ISSN 2277-8616

A Survey On Vehicle Detection And Tracking Algorithms In Real Time Video Surveillance
Sri Jamiya S, Esther Rani P

Abstract: Automated surveillance systems are of critical importance in traffic management and in monitoring unwanted activities. The intelligent transportation system plays a crucial role in traffic management by providing an efficient and reliable transportation system. One application of the intelligent transportation system is to detect and track vehicles accurately. Image processing algorithms have been widely developed to monitor the motion of vehicles, humans and other objects. The main aim is to detect and recognize moving objects in real surveillance videos, to avoid congestion on highways and in parking areas and to prevent accidents. In comparison with still images, every video frame provides rich information about vehicles in scenarios that change over time. Many algorithms have been developed to improve real-time detection of incidents, and it remains a challenging task for researchers to determine driver behavior under the diversity of vehicles, weather and lighting conditions. In this paper, a detailed overview of object motion detection, classification, and tracking algorithms is presented, and the strengths and weaknesses of the various algorithms are discussed.

Index Terms: Intelligent Transportation System, Vehicle detection, Foreground detection, Object classification, Tracking, Feature Extraction, Occlusion,
Surveillance Systems
————————————————————

• Sri Jamiya S, Research Scholar, Electronics and Communication Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai. Email: [email protected]
• Esther Rani P, Professor, Electronics and Communication Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai. Email: [email protected]

1. INTRODUCTION
The technological advancement of intelligent transportation systems has been applied in developed countries to monitor traffic movement and to track moving vehicles accurately with CCTV cameras. In the literature, a wide range of algorithms has been proposed for the implementation of intelligent transport systems; in this paper the various algorithms, their merits, and their demerits are discussed in detail. Video surveillance systems have also been widely used for a very long time to monitor sensitive areas from a security point of view. They include first generation (1GSS), second generation (2GSS), and third generation (3GSS) systems. The first generation (1GSS, 1960-1980) surveillance systems were generally based on analog subsystems for image acquisition and transmission, with the videos of the various monitoring cameras sent to a central control room. The major drawbacks of such systems are the high bandwidth requirement, the difficulty of storing and retrieving videos because of the large number of videotapes required, and the difficulty of monitoring live videos, since that requires human operators with very little spare time. The second-generation (2GSS, 1980-2000) surveillance systems were of a hybrid type: they used both analog and digital subsystems to resolve the drawbacks of the first generation, exploiting digital video processing to monitor real-time events and to assist human operators. Third generation (3GSS, 2000- ) surveillance systems define end-to-end digital communication systems. Image acquisition and processing are assisted by sensors, communication is done over mobile networks, and data is stored on central servers using low-cost digital infrastructure. In 3GSS, CCTV surveillance cameras are used to identify the apparent movement of objects, primarily for public safety, crime prevention, and accidents on roadsides and highways. The visual surveillance systems can generate real-time alarms by the use of various techniques even in complex environments with day/night changes and occluding conditions such as tree shadows, wind blowing, etc. Smart surveillance systems are used for detecting and tracking objects in motion, which makes the system more dependable and robust. An efficient algorithm captures the apparent movement of objects for detection, classification, and tracking in various scenarios, to obtain accurate information.
Moving object detection is the foremost step in analyzing the video frames: it segments the moving objects from the static background. The most popular techniques for object detection are background subtraction, statistical models, frame differencing, temporal differencing and optical flow models. Smart surveillance systems provide robust and precise detection of moving objects even in the presence of various environmental conditions such as light changes and occluding objects like trees and shadows. The second step is object classification, which classifies the detected regions into pedestrians, flows of vehicles, animals, and so on. Moving object classification techniques can be categorized into two types: shape based and motion based methods. Shape based methods use spatial information such as the width/height ratio, binary edge maps and the vehicle's outer contour. In motion based methods, temporal information is used, where the median is calculated for each pixel to classify vehicles as small sized, medium sized or over-sized. Smart surveillance systems can also detect natural phenomena such as fire and smoke. The third step in video processing is object tracking, which follows moving objects frame by frame and is simply defined as the creation of temporal correspondence among detected objects from frame to frame. The tracked outputs of moving vehicles are used to construct an efficient transportation system that segments and classifies the moving vehicles. The final step of visual surveillance systems is object recognition, which recognizes and differentiates the moving objects for decision making. Visual surveillance systems in computer vision provide very efficient and accurate vehicle detection and tracking, including tracking autonomous vehicles for the safety of the public. The various algorithms provide the operators with exact high-level information to make correct decisions and store the information for future verification.

The progression of technology in video surveillance systems and safer driver assistance systems lets researchers understand these systems more effectively; some scenarios and applications are described below.

1.1 A SURVEY IN VIDEO SURVEILLANCE
This survey discusses many research works on object classification, detection, and tracking. Such a system is required for preventing crime and accidents to ensure the safety of the public.

2 MOVING OBJECT DETECTION
Each application that benefits from smart video processing has different needs and thus requires different handling of objects. However, they all have something in common: moving objects. In every vision system, detecting moving objects such as people and vehicles in the video is a common requirement. Moving object detection consists of preprocessing, feature extraction, classification, detection, and tracking steps.

[Figure 1: Flowchart for smart video processing algorithms.]

Though the analysis proceeds in sequential steps, it has demerits in the autonomous detection of vehicles in various situations, such as indoor and outdoor environments with illumination changes, weather, and occluded conditions, which cause difficulty in detecting and tracking moving vehicles.

[Figure 2: The system block diagram.]

The techniques most commonly used for this purpose are background subtraction, optical flow methods, statistical methods, frame differencing and temporal differencing, and they are described below.

2.1 BACKGROUND SUBTRACTION
The background subtraction technique is widely used for motion segmentation in many applications. It finds the moving regions in an image by subtracting the current image, pixel by pixel, from a reference background image which is formed by averaging images. If the subtracted pixel value is greater than a threshold then the pixel is labelled as foreground. To enhance the detected regions, post-processing operations like dilation, erosion, and closing are performed to reduce the noise level. Approaches to background subtraction differ in how they perform foreground detection, background maintenance, and post-processing. Heikkila and Silven [1] used the simplest version, where a pixel at location (x, y) in the current image I_t is marked as foreground if

    |I_t(x, y) - B_t(x, y)| > tau                                         (1)

where B_t is the background image and tau is a predefined threshold [1]. An Infinite Impulse Response (IIR) filter was used to update the background image:

    B_{t+1}(x, y) = alpha * I_t(x, y) + (1 - alpha) * B_t(x, y)           (2)

where alpha is the adaptation rate. Foreground pixel maps are then cleaned by eliminating small-sized regions and by morphological closing. Though background subtraction techniques are effective, they lack performance under dynamic changes, such as stationary objects uncovering the background (e.g. a parked bus moving out of a parking space) or sudden light changes.
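As a concrete illustration of the pixel test in Eq. (1) and the IIR update in Eq. (2), the sketch below uses OpenCV and NumPy on grayscale frames. The learning rate, threshold, input file name and the choice to update only background-labelled pixels are assumptions for illustration, not details taken from [1].

    import cv2
    import numpy as np

    ALPHA = 0.05   # assumed IIR learning rate
    TAU = 30       # assumed per-pixel threshold

    cap = cv2.VideoCapture("traffic.avi")  # hypothetical input video
    ok, frame = cap.read()
    background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

        # Eq. (1): pixel is foreground if |I_t - B_t| > tau
        fg_mask = (np.abs(gray - background) > TAU).astype(np.uint8) * 255

        # Post-processing: morphological opening and closing suppress noise
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
        fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_CLOSE, kernel)

        # Eq. (2): IIR update, applied here only to background pixels (a common variant)
        bg_pixels = fg_mask == 0
        background[bg_pixels] = (ALPHA * gray + (1 - ALPHA) * background)[bg_pixels]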

2.2 STATISTICAL METHODS
Statistical models have evolved to overcome the limitations of the fundamental background subtraction techniques. In a statistical method, the characteristics of an individual pixel or of a group of pixels are used to construct the background frame, and the background statistics can be updated automatically during processing. This makes the technique more reliable and effective in several scenarios, such as illumination changes, distortion caused by low resolution, roadside trees, and shadows. In one such system, each pixel is represented by its minimum intensity value M, its maximum intensity value N, and the maximum intensity difference D between any succeeding frames, observed during an initial stage in which the scene contains only stationary objects. A pixel of the current image I_t is considered foreground if it satisfies the condition

    |M(x, y) - I_t(x, y)| > D(x, y)   or   |N(x, y) - I_t(x, y)| > D(x, y)      (3)

After thresholding, the detected foreground pixels contain noise, which is removed by morphological erosion. Because erosion also shrinks genuine regions, a series of erosion and dilation operations is performed on the foreground pixel map. Small noisy pixel regions are also removed by applying connected component labeling. The static areas of the current image are used to update the statistics of the background pixels. In another technique, Stauffer and Grimson [2] presented an adaptive background mixture model for tracking in real time. They used online approximations to update the mixture of Gaussians that models each pixel.
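For the adaptive mixture-of-Gaussians model of Stauffer and Grimson [2], OpenCV provides a ready-made MOG2 background subtractor; the sketch below pairs it with connected-component labeling to discard small noisy regions. The parameter values and the minimum blob size are illustrative.

    import cv2

    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                    detectShadows=True)
    cap = cv2.VideoCapture("highway.mp4")  # hypothetical input video

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                               # 255 = foreground, 127 = shadow
        mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels

        # Connected-component labeling to discard small noisy regions
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        for i in range(1, n):
            x, y, w, h, area = stats[i]
            if area > 400:                                           # assumed minimum blob size
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        cv2.imshow("detections", frame)
        if cv2.waitKey(1) == 27:                                     # Esc to quit
            break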


2.3 TEMPORAL DIFFERENCING
In temporal differencing, the reference image is the previous frame. A pixel is labelled as moving when the difference between the previous frame and the current frame is greater than a threshold value. This method is well suited to dynamic scene environments, but it has some disadvantages in the detection of moving objects. For example, if an object is uniformly colored it fails to detect the whole pixel region even when the object is moving, and it cannot handle static scenes. For high-level processing and to detect objects that have stopped, other techniques must be brought in. Lipton et al. [3] presented two-frame differencing, where a foreground pixel is defined by the following condition:

    |I_t(x, y) - I_{t-1}(x, y)| > tau                                     (4)

In order to resolve the defects of two-frame differencing, Collins et al. [4] used a hybrid three-frame differencing method. Video Surveillance and Monitoring (VSAM) is a highly recommended technique for observing moving objects in a sequence of images. This hybrid algorithm performs motion segmentation by combining background subtraction with a three-frame differencing technique, and it detects moving objects quickly.
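A minimal sketch of the two-frame test in Eq. (4) and of a three-frame variant in the spirit of Collins et al. [4] is given below; the threshold value is assumed for illustration.

    import cv2
    import numpy as np

    TAU = 25  # assumed intensity threshold

    def two_frame_mask(prev_gray, curr_gray):
        """Foreground where |I_t - I_{t-1}| > tau, as in Eq. (4)."""
        return (cv2.absdiff(curr_gray, prev_gray) > TAU).astype(np.uint8) * 255

    def three_frame_mask(prev2_gray, prev1_gray, curr_gray):
        """Hybrid three-frame variant: a pixel must differ from both previous frames."""
        d1 = cv2.absdiff(curr_gray, prev1_gray) > TAU
        d2 = cv2.absdiff(curr_gray, prev2_gray) > TAU
        return np.logical_and(d1, d2).astype(np.uint8) * 255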
2.4 OPTICAL FLOW
Optical flow is based on motion segmentation and detects moving objects even when the camera itself is moving. However, it is computationally complex and suffers from more noise.
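As an illustration of optical-flow-based motion detection, the sketch below computes dense flow with OpenCV's Farneback implementation and thresholds the flow magnitude; the 1.5 pixel threshold and the Farneback parameters are assumed values.

    import cv2
    import numpy as np

    def motion_mask_from_flow(prev_gray, curr_gray, mag_thresh=1.5):
        """Mark as moving any pixel whose dense optical-flow magnitude exceeds mag_thresh."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)
        return (magnitude > mag_thresh).astype(np.uint8) * 255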
2.5 SHADOW AND LIGHT CHANGE DETECTION
The motion detection algorithms described above have been used for real-time surveillance for years and perform well in indoor and outdoor environments. However, without special handling, most of these algorithms are vulnerable both to local illumination changes (e.g. shadows and highlights) and to global illumination changes (e.g. the sun being covered or uncovered by clouds). Motion detection is inaccurate when moving objects are accompanied by shadows, and object classification also fails in the presence of shadows and sudden light changes. In the background subtraction and shadow detection method, pixels are represented by a color model that separates brightness from the chromaticity component. The pixels in the image are divided into four types (background, shaded background or shadow, highlighted background, and moving foreground object) by calculating the distortion of brightness and of chromaticity between the background model and the current image pixels. Shadows can be detected by two kinds of methods: statistics based and video based. In statistics based methods, the fact that intensity values in shadow regions are lower than in the background is analyzed. In video based methods, the differences between neighboring pixels in intensity, geometry, color, and brightness are calculated.
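The brightness/chromaticity decomposition described above can be sketched as follows, in the spirit of the Horprasert-style color model; the decision thresholds that would separate the four pixel classes are scene dependent and are not specified here.

    import numpy as np

    def brightness_chromaticity(frame, bg_mean):
        """Per-pixel brightness distortion (alpha) and chromaticity distortion (cd)
        with respect to a background color model bg_mean (both H x W x 3 arrays)."""
        I = frame.astype(np.float32)
        B = bg_mean.astype(np.float32)
        alpha = np.sum(I * B, axis=2) / (np.sum(B * B, axis=2) + 1e-6)
        cd = np.linalg.norm(I - alpha[..., None] * B, axis=2)
        return alpha, cd

    # Hypothetical decision rule: low chromaticity distortion with alpha < 1 indicates
    # a shaded background pixel (darker but the same color), alpha > 1 a highlighted
    # background pixel, and a large cd a moving foreground object.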
3 OBJECT CLASSIFICATION
The regions detected in a video contain different moving objects such as vehicles, humans, animals, and so on. To track them without difficulty, the types of objects in the detected regions need to be distinguished so that they can be analyzed properly.

3.1 MOTION BASED CLASSIFICATION
Motion based classification usually distinguishes non-rigid objects (e.g. humans) from rigid objects (e.g. vehicles) by temporal motion features. One proposed method is based on the temporal self-similarity of a moving object: if an object exhibits periodic motion, its self-similarity measure also shows periodicity, and the method relies on this cue to categorize moving objects. Rigid and non-rigid objects can also be identified by optical flow analysis. A. J. Lipton [3] proposed a method which uses a local optical flow technique in the regions of detected objects. High residual flow is present in non-rigid objects (humans), whereas rigid objects (vehicles) show little residual flow. The motion of pedestrians exhibits periodicity because of the residual flow it generates, and this can be used to differentiate pedestrians from other objects such as vehicles.
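Lipton's residual-flow cue can be approximated with dense optical flow by subtracting an object's dominant (rigid) motion from the per-pixel flow inside its bounding box. The sketch below is illustrative only; the 1 pixel tolerance and the 0.25 decision ratio are assumed values.

    import cv2
    import numpy as np

    def residual_flow_ratio(prev_gray, curr_gray, box):
        """Fraction of pixels in `box` whose flow deviates from the object's median
        (rigid) motion; high values suggest non-rigid motion such as a pedestrian."""
        x, y, w, h = box
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        roi = flow[y:y + h, x:x + w].reshape(-1, 2)
        rigid = np.median(roi, axis=0)                # dominant translation of the blob
        residual = np.linalg.norm(roi - rigid, axis=1)
        return float(np.mean(residual > 1.0))         # assumed 1 pixel tolerance

    # Hypothetical usage on a detected blob (x, y, w, h):
    # label = "non-rigid" if residual_flow_ratio(prev, curr, box) > 0.25 else "rigid"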
3.2 FIRE DETECTION
Discussions of fire detection are rare in the computer vision literature. Many methods exploit color and extract the motion features of fire, but they also generate false alarms in the presence of fire-colored segments. Spectral, spatial and temporal models are defined to detect fire in a video: the spectral model is a pixel color probability density of fire, the spatial model describes the spatial structure of the fire region, and the temporal model captures changes of that spatial structure over time.

3.3 OBJECT DETECTION
The most important technique in the field of intelligent transportation systems is object detection. In this technique, targets such as cars and traffic signs are detected; for detection, the shape of the car and spatial and temporal information from traffic signs are the extracted features [5], [6]. Optical flow is a technique for motion segmentation and detection which detects moving objects even when the camera is moving, and it can detect and track moving objects in aerial views with better accuracy than background subtraction. However, optical flow methods are complex to process and suffer from noise. The method of Horn and Schunck [7] was found to perform better than the method of Lucas and Kanade [8] for detecting motion in aerial views, and a lot of research has been carried out on enhancing optical flow based on the Horn-Schunck and Lucas-Kanade methods. Many algorithms have been proposed to detect motion in different scenarios [9], [10], [11], [12], [13]. In [11], optical flow with indoor fixed cameras is used to detect objects in video streams under the existing illumination; it was applied in motion detection software to analyze the motion level, the motion region and the number of objects, and its only drawback is sensitivity to changes in object velocity and in the lighting of the area. For static cameras, tracking of moving objects is done by combining motion segmentation and an optical flow algorithm in [9]; optical flow does not depend on foreground or background regions, as it performs segmentation using pixel by pixel classification. In [14], an optical flow algorithm was used in silhouette regions with a 2-way ANOVA, and brightness change was minimized by object segmentation. In [15], crowd monitoring in videos was done with Horn-Schunck optical flow; the proposed technique detects and tracks outdoor scenes using flow-based analysis. In [12], outdoor scenes are detected with edge detection and gradient based optical flow; the edge detection based techniques are more robust and are not vulnerable to light changes. For cameras in motion, moving objects are detected by classification and motion clustering in [10].


In [9], fusion of Horn-Schunck optical flow computed on small squares of aerial color images was used to estimate the flow field in each color plane, and those fields were then fused together. In [16], the Lucas-Kanade optical flow method is combined with a stereo camera for UAVs, and in [17], [18] a combined fusion control scheme was used for navigation in urban areas. Lucas-Kanade provides promising results if the flow is constant within a pixel's neighborhood, as in [19], [20]; the equations are derived by least squares over the local neighborhood in [17], [21]. The Horn-Schunck technique gives the best results if the flow is smooth throughout the entire frame, i.e. the object's motion is not restricted to some small neighborhood, as in [22], [23]. Most researchers use hybrid methods for motion detection; a hybrid method combines two or more methods of different kinds to get rid of motion detection problems. In [17], two optical flow techniques were compared and their performance was evaluated. In [23], eight optical flow algorithms were tested on synthetically generated data with added noise and high complexity, and the authors report that the method in [8] provided the best result. The area of a moving object was detected by a hybrid of temporal differencing and optical flow in [24]: the difference between frames is computed by the temporal difference method, the differential image is filtered using a low pass filter and edge detection, and the optical flow algorithm is used to find the velocity from the spatiotemporal derivatives of the image intensity. For a static camera, the results of this combined temporal difference and optical flow technique were quite promising, but not for a camera in motion [24]. Many motion detection algorithms have been proposed, and most of them use a simple thresholding operation on the difference in image intensity, comparing the current frame with background frames from consecutive frames of the video; such algorithms are of the simplest form, yet their performance is not promising [25]. In [26], [27], [28], [29], statistical and probabilistic models are used to improve the performance of background subtraction. The performance of these algorithms mainly depends on the threshold value, and various methods of threshold adaptation are described in [25]. By choosing a Markov Random Field (MRF) in a Bayesian framework [29], the most promising detection results are obtained by frame differencing and modeling of change labels. In [30], a Bayesian Markov random field (MRF) method was used to increase performance by using the shape of the detected objects and reducing noise; however, the most important work is removing the background and exploiting pixel correlation across frames. A Bayesian algorithm [29] was used to extract the shape of the moving object; it deals very well with repetitive object motion, variation in light, noise reduction and shadow removal, and the results and performance of the algorithm are promising. In [31], a Sobel filtering method was used so that low quality web cameras in laptops could process moving images on the same low end hardware; the initial algorithm is quite fast and the second one performs edge detection of the object, and the result shows 45.5% object detection time with 14% memory use while maintaining the same level of accuracy. In [32], an Enhanced Dynamic Bayesian Network (DBN) technique is used for vehicle detection in aerial surveillance, and this method is found to be flexible. In [33], multiple objects are tracked using a spatial and second derivative detection and tracking model, but it cannot track multiple objects in low quality videos. Speeded Up Robust Features (SURF) optimizes the Scale Invariant Feature Transform (SIFT) [34], but the SURF processing time is still long. The Oriented FAST and Rotated BRIEF (Binary Robust Independent Elementary Features) algorithm, ORB [6], performs feature extraction in outdoor environments; for the binary descriptor, the Local Difference Binary (LDB) method is used, and the image descriptors are matched by K-Nearest Neighbor (KNN) [35]. Local invariant features are extracted by BRIEF, on which ORB is based, in [36]; the BRIEF technique is fast but has a drawback with respect to noise, and its advantage is that it deals with two major problems. ORB detects corners with the Harris method and uses the intensity centroid to calculate the rotation of the object direction [37]. The researchers of [38] developed an algorithm to compute real-time traffic details by classification, counting and segmentation of vehicles; the important goal of this technique is that it can cope with sudden changes in light conditions by using a feature based counting technique for vehicle detection and tracking [38].
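As a sketch of the binary-descriptor pipeline discussed above (ORB keypoints, brute-force Hamming matching, and K-nearest-neighbour matching with a ratio test), consider the following; the feature count and ratio are illustrative.

    import cv2

    orb = cv2.ORB_create(nfeatures=1000)        # FAST keypoints + rotated BRIEF descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)   # Hamming distance for binary descriptors

    def match_frames(img1, img2, ratio=0.75):
        """Return matched keypoint coordinate pairs between two grayscale frames."""
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        if des1 is None or des2 is None:
            return []
        good = []
        for pair in matcher.knnMatch(des1, des2, k=2):    # K-nearest-neighbour matching
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good.append((kp1[pair[0].queryIdx].pt, kp2[pair[0].trainIdx].pt))
        return good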

Table 1: Literature Survey on Vehicle Detection

[7] Horn and Schunck
  Approach: Optical flow
  Description: Optical flow detects moving objects even when the camera is also in motion. It uses the pattern of light in the image for detection.
  Pros: It can deal with a sequence of images that can be classified as a set rather than as unshaped regions in spatial arrangements.
  Cons: It is sensitive to noise and brightness levels.


[8] Lucas and Kanade
  Approach: Image Registration Technique
  Description: The spatial intensity gradient is used to match the objects in the images using Newton-Raphson iteration.
  Pros: It can find the match even with few details and can also detect objects that are rotated, scaled or sheared.
  Cons: Image quality should be higher for matching of objects.

[25] J. M. McHugh, J. Konrad, V. Saligrama, P. Jodoin
  Approach: Adaptive background subtraction with Markov Random Field
  Description: Object detection is done by detecting the change in a series of images. Spatial coherence is improved by MRF-based changes in the labels of the thresholds.
  Pros: Achieves better performance by adapting a statistical model, a non-parametric background model and an MRF model to vary the thresholds.
  Cons: When foreground is added to the background, the detected regions grow rather than shrink.

[32] Hsu-Yung Cheng, Chih-Chia Weng, Yi-Ying Chen
  Approach: Enhanced Dynamic Bayesian Network
  Description: Automatic vehicle detection in aerial surveillance is proposed in this paper using Dynamic Bayesian Networks for detection and tracking.
  Pros: Objects are detected based on colors, shape and the feature extraction intensity of pixels; edge detection is then done by a Canny edge detector.
  Cons: It cannot track multiple objects in videos of low quality.

4 CLASSIFICATION METHODS
The classification methods surveyed in this chapter are based on non-probabilistic algorithms, which provide good results and are easy to implement [39], [40]. Some of these algorithms are given below. Fisher's Linear Discriminant works with two classes {-1, 1}: when the classification of an object or event is made, it must belong to one of those two classes. This algorithm was used to deal with binary classification problems; a class separation criterion is used to distribute the classes, and it has low computational complexity [41]. Quadratic Discriminant Analysis (QDA) works for nominal labels and numerical attributes of objects; discriminant analysis is used to determine the differences between two or more naturally occurring groups, and it may have a descriptive or a predictive objective. The Nearest Neighbor method selects a metric measure and uses all the training data for classification; though it is a simple method, it has complexity issues and needs other algorithms to increase performance. K-Nearest Neighbor provides more promising results than Nearest Neighbor, and its performance is also better for probability density functions; the value of K must be set by validation methods, as in [42], and its computational complexity is high. Support Vector Machines (SVM) are based on statistical learning with the structured risk minimization principle [43]. An SVM classifies the given input data into two predefined classes, and the classifier's prediction largely depends on the available data. SVM provides promising results in numerous pattern recognition cases [44], but it is not very accurate in all cases; a multiclass formulation is used to fix such problems, as in [45]. In vehicle classification only two classes exist: either something is a vehicle or it is not. To handle this we need large data sets with images of vehicles of different shapes, so the problem is more complex and the results were not always accurate in [45]. Margin maximization is used to rectify this problem by matching different views of vehicles in the dataset classes. SVM is regarded as a robust tracking and detection algorithm and is widely used in aerial motion detection, as in [46], [47]; in colored aerial videos, objects are detected using Dynamic Bayesian Networks with the help of SVM. The Neural Network classifier is another widely used technique. The Multi-Layer Perceptron (MLP), also called an artificial neural network, is one of the important classifiers; it feeds the input data forward and correlates it with the output data to form a graph. An MLP consists of several layers of nodes, and each node of a layer is connected to the nodes of the other layers [48]. AdaBoost classifiers are used to boost the performance of classification techniques. If a classification technique classifies objects in terms of weights, an object with the lowest weight is considered unclassified and not properly formed, and its accuracy is low; AdaBoost builds an algorithm that increases the weights of low accuracy objects and decreases the weights of high accuracy objects, and this process is continued until a fine classifier is obtained. AdaBoost can use various base algorithms to enhance performance; it is weak on data outside the margin area but gives optimal results with its base classifiers and shows improved performance in [49]. SURF is mainly used for feature detection tasks such as object recognition, 3D reconstruction, object registration and classification. It is derived partially from the SIFT (Scale Invariant Feature Transform) descriptor; SURF is more effective and robust and has higher performance than SIFT, and it is highly effective in tracking, detecting and recognizing objects and in forming 3D scenes from those objects [50]. The Histogram of Oriented Gradients (HOG) is a descriptive feature; areas like computer vision and image processing use this descriptor to detect objects. It extracts gradient features from the object, which are used for object detection in [51], [52]. The Enhanced Local Vector Pattern (ELVP) is a vector based technique which expresses the structured data in a given texture as a 1D spatial structure over the corresponding pixels of the object; the LVP is represented with 2D directions of the available patterns of objects with respect to the reference pixel. A Genetic Algorithm (GA) is used to enhance the ELVP, SURF, and HOG methods. It creates genotypes from the genome by populating strings, and phenotypes are created by encoding candidate solutions; the results are binary values, 0s and 1s, and the GA populates the results in random order using inversion and crossover operations [52], [53].
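A common pairing from this section is HOG features with a linear SVM. The sketch below computes HOG descriptors on fixed-size 64x64 grayscale crops and trains OpenCV's SVM; vehicle_crops and background_crops are hypothetical inputs the reader would prepare, and the window and training parameters are illustrative.

    import cv2
    import numpy as np

    # 64x64 window, 16x16 blocks, 8x8 stride and cells, 9 orientation bins
    hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

    def hog_features(patches):
        """Compute a HOG feature vector for each grayscale patch."""
        return np.array([hog.compute(cv2.resize(p, (64, 64))).ravel() for p in patches],
                        dtype=np.float32)

    def train_vehicle_classifier(vehicle_crops, background_crops):
        """Train a linear SVM on HOG features; the crop lists are assumed inputs."""
        X = np.vstack([hog_features(vehicle_crops), hog_features(background_crops)])
        y = np.array([1] * len(vehicle_crops) + [0] * len(background_crops), np.int32)
        svm = cv2.ml.SVM_create()
        svm.setType(cv2.ml.SVM_C_SVC)
        svm.setKernel(cv2.ml.SVM_LINEAR)
        svm.train(X, cv2.ml.ROW_SAMPLE, y)
        return svm

    # Prediction on new crops: svm.predict(hog_features(crops))[1] gives 1 = vehicle.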


Table 2: Literature Survey on Vehicle Classification

[54] Denis Kleyko, Roland Hostettler, Wolfgang Birk, and Evgeny Osipov
  Approach: Comparison of vehicle classification techniques by machine learning on roadside sensors
  Description: A dataset of 3074 samples is processed for vehicle classification by D. Kleyko et al. using different machine learning algorithms. Various classification techniques are used, such as SVM, neural networks and logistic regression.
  Pros: Logistic regression shows high performance compared with the other machine learning methods, with a classification rate of 93.4%.
  Cons: The main difficulty in this method is the usage of the datasets, as it was focused mainly on a single class, which is very difficult to search while classifying.

[55] Zezhi Chen, Tim Ellis, Sergio A. Velastin
  Approach: Comparison of various classification schemes for vehicle type
  Description: Vehicles are classified into four different classes: car, bus, van and motorcycle. Two types of methods are used here, SVM and random forest.
  Pros: The accuracy of SVM is 96.26%, more robust than RF.
  Cons: Due to the similar image size and shape of car, bus and van, misclassification occurs.

[56] Muhammad Asif Manzoor, Yasser Morgan
  Approach: SIFT features used to classify vehicle make and model
  Description: The proposed method uses a linear SVM to process the data. The features are extracted using the Scale Invariant Feature Transform (SIFT).
  Pros: The final accuracy is about 89% on the NTOU-MMR dataset.
  Cons: Front-facing vehicle images are difficult to classify.

[57] A.H.S. Lai, G.S.K. Fung, N.H.C. Yung
  Approach: Classification of vehicle type by visual-based dimension estimation
  Description: The three classes of the dataset used by A.H.S. Lai et al. are taxi, mini-bus, and double-decker. Estimation of the vehicle is based on its length, width, and height.
  Pros: The accuracy obtained for the estimation of vehicles is 92.5%.
  Cons: The main disadvantages faced are vehicle bumpers close to the road and the vehicle mask.

[58] Seda Kul, Süleyman Eken, Ahmet Sayar
  Approach: Vehicle detection and classification in real time video streams
  Description: A distributed method for a real time vehicle detection and classification system is proposed by Kul et al. Other techniques used here are vehicle classification, feature extraction, foreground detection and background subtraction.
  Pros: In broad daylight the results are promising, with an accuracy of 89.4%.
  Cons: The system was not evaluated at night or in bad weather conditions.

[59] Z. Dong, Y. Wu, M. Pei, and Y. Jia
  Approach: Semisupervised Convolutional Neural Network used for vehicle classification
  Description: In the method of Dong et al., a semisupervised Convolutional Neural Network is used for the classification of vehicles. The dataset consists of 9850 high resolution images and holds only front views of vehicles.
  Pros: In daylight 96.1% accuracy is registered, and at night 89.4%.
  Cons: Misclassification occurs due to incorrect labels in the BIT dataset.


5 VEHICLE TRACKING APPROACHES
Object tracking in a video processing system is a significant step for following the motion of objects in visual-based surveillance systems, and it has been a challenging task for many researchers [60].

[Figure 3: The Object tracking system.]

The motion of objects such as vehicles is tracked and located in dynamic scenes by the physical appearance of the objects, in order to determine the movement of the blobs between two successive frames of the video analysis [61]. The various vehicle tracking methods proposed by researchers for different problems are explained below.

5.1 REGION BASED TRACKING METHODS
In region based tracking methods, the particular regions of the moving objects, such as vehicle blobs, are tracked in order to locate the vehicles. These regions are segmented by subtracting the current image from the previous image. A model-based automobile recognition, vehicle tracking and classification scheme is developed in [62] which is efficient and reliable under several conditions. This method considers the positions and speeds of the moving vehicles for as long as they are visible, and it worked on successive traffic scenes recorded by a static camera for automobile recognition. This region based model has three levels: raw (original) images, the region level, and the vehicle level. The counting of vehicles and vehicle classification in a traffic road management system is illustrated in [63]: the flow of moving vehicles and the classification of vehicles into car, bus and van are described, and a scheme that removes false regions together with a shadow elimination algorithm achieves more accurate and reliable segmentation of the moving vehicles.

5.2 CONTOUR TRACKING METHODS
These methods depend on the contours (boundaries) of the vehicle in the process of vehicle tracking [64]. A video traffic surveillance system for real time supervision was proposed by the authors in [65], which makes use of optical flow and tracks a vehicle in a 3D structure. This approach consists of two techniques, color contour based matching and gradient based matching, and it produced more accurate results for tracking the moving vehicles, classification of objects, foreground and background detection, vehicle flow, vehicle count, vehicle velocity and vehicle recognition. Tracking and classification of vehicles in real time traffic video surveillance is illustrated in [66]. In this work, counting and classification of vehicles, detection of traffic lane changes, direction and vehicle speed are performed. Multiple moving vehicles in heavy traffic are detected and tracked even under various weather conditions and occluding objects like trees or shadows. For tracking and locating the moving objects, a Kalman filter, background subtraction methods and morphological processing operations are used to extract and identify the vehicle's contour.

5.3 3D MODEL BASED TRACKING METHODS
These model based tracking methods handle occlusion, roadside trees and shadows on the moving vehicles, and they use a well known 3D model of a solid cuboid that is fitted to vehicle images of various types and sizes by varying its vertices, as in [67]. By changing the region proportion and the prototype width and height with respect to previous images, it achieves efficient detection and tracking of vehicles. The classification of multiple moving vehicles, such as small vehicles (e.g. bike, motorcycle), mid-size vehicles (e.g. car, van) and heavy vehicles (e.g. trucks), is carried out in [68]; in that paper, the distance is measured by the use of the 3D geometric shape of the vehicles. A new framework of 3D model-based tracking for vehicle detection and tracking based on region proportions and boundary feature grouping was presented in [69]. This method has the advantage of more flexibility in detecting and tracking vehicles and is more reliable for many applications.

5.4 FEATURE BASED TRACKING METHODS
The feature based tracking method of [70] uses a SURF (Speeded Up Robust Features) descriptor over a large set of region features to classify vehicles in smart surveillance videos, and it performed well in classifying similar and dissimilar classes. A line-based shadow method uses a linearity feature technique to remove all the shadows in the occluded image; an automatic vehicle tracking and classification traffic observation system is presented in [71].

5.5 COLOR AND PATTERN BASED TRACKING METHODS
Using the color and pattern of vehicle image series from traffic video surveillance is the technique of [72]. This technique is used for foreground and background segmentation, vehicle flow, shadow removal, vehicle velocity, vehicle count and vehicle location, and the system is shown to work in different climatic conditions and to be insensitive to lighting conditions. A real time traffic lane management system detects and locates vehicles to avoid congestion on roads and highways for the safety of the public [73]; there are three main levels in this system: 1D shape patterns, a tracking level, and 2D pattern verification. Vehicle tracking plays a crucial role in Intelligent Transportation Systems applications for safety and security. Several algorithms used for vehicle tracking are the mean shift algorithm, the CamShift algorithm, optical flow, SURF, etc. The mean shift algorithm is a pattern matching algorithm which uses a kernel-weighted histogram of the targeted vehicle; it can track the vehicle efficiently even when a large area of the targeted object is blocked [74], [75].
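The mean shift tracker just described is available in OpenCV through histogram back-projection; a minimal sketch follows, assuming the initial bounding box (x, y, w, h) comes from one of the detection stages described earlier and that a hue histogram is a sufficient appearance model.

    import cv2

    def track_with_meanshift(cap, init_box, max_frames=500):
        """Track one vehicle with mean shift on a hue histogram back-projection."""
        ok, frame = cap.read()
        x, y, w, h = init_box
        hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        box = init_box
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            _, box = cv2.meanShift(back_proj, box, term)   # shift window to density peak
            yield box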


The CamShift (Continuously Adaptive Mean Shift) algorithm is based on a 1D histogram of the object and is mainly used for detecting faces; it produces poor performance where the foreground object is similar to the background or its color varies significantly [74], [76]. The SURF (Speeded Up Robust Features) algorithm is related to the two dimensional Haar wavelet response and gives a better solution than CamShift when the foreground object is similar to the background, but as its computation time is high, it may not be suitable for tracking objects in real time. The optical flow technique is used to differentiate multiple foreground objects from the background in an image; it depends on the distance moved by the objects within the scene [77]. The Kalman filter is used to track multiple moving objects very effectively. A framework was proposed to track objects for a mobile robot traveling in crowded scenarios using a deep tracking framework [78], [79]. In [80], the authors developed outdoor tracking of moving vehicles based on a deep learning framework: image features are learned by pre-training a stacked denoising autoencoder, k-sparse constraints are then added to the stacked denoising autoencoder (kSSDAE), and it is linked with a classification layer to form an enhanced classification neural network. This is applied to an online tracker, and after fine tuning, the evaluation shows good vehicle tracking performance.
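For the Kalman-filter-based tracking mentioned above, a constant-velocity filter over a blob centroid can be set up with OpenCV as sketched below; the state model and the noise covariances are illustrative choices, not values from the cited works.

    import cv2
    import numpy as np

    def make_centroid_kalman(dt=1.0):
        """Constant-velocity Kalman filter over the state [x, y, vx, vy],
        measuring only the centroid (x, y) of a detected vehicle blob."""
        kf = cv2.KalmanFilter(4, 2)
        kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                        [0, 1, 0, dt],
                                        [0, 0, 1, 0],
                                        [0, 0, 0, 1]], np.float32)
        kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                         [0, 1, 0, 0]], np.float32)
        kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2      # assumed
        kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1.0   # assumed
        return kf

    # Per frame: predicted = kf.predict(); if a detection (cx, cy) is associated with
    # this track, call kf.correct(np.array([[cx], [cy]], np.float32)) to refine it.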

Table 3: Literature Survey on Vehicle Tracking

[62] S. Gupte, et al.
  Approach: Region-Based Tracking
  Description: S. Gupte proposed a system for tracking based on segmentation, region tracking, vehicle parameters, and classification.
  Pros: The proposed system can detect, track and classify vehicles with little data. It can also provide information about the detected objects, such as their location and velocity.
  Cons: The system is sensitive to bad weather conditions, noise and variable illumination.

[64] D. Koller, [65] A. Ambardekar
  Approach: Contour Tracking
  Description: D. Koller's paper proposes a contour tracking technique based on extracting motion and gray value boundaries, derived from spatial and temporal derivative thresholds in the images. A. Ambardekar used color based contouring and a gradient based algorithm.
  Pros: The proposed technique is fast and the results are quite promising.
  Cons: The system is vulnerable to noise.

[67] Nelson H. C. Yung and Andrew H. S. Lai
  Approach: 3D Model-Based Tracking
  Description: The model based tracking method in [67] describes an occlusion detection process for moving vehicles and uses a well known 3D model of a solid cuboid that is fitted to vehicle images of various types and sizes by varying its vertices.
  Pros: The system is very effective in the detection of vehicle occlusion.
  Cons: The proposed system is not tested on vehicles in traffic.

[70] Xiaoxu Ma, W. Eric L. Grimson
  Approach: Feature-Based Tracking
  Description: The proposed method is a feature based tracking method which uses a SIFT feature descriptor for tracking. It forms a rich representation of object classes.
  Pros: The proposed approach provides better performance.
  Cons: When the view is changed the system is ineffective, and occlusion is not tested.

[72] Seda Kul, Süleyman Eken, Ahmet Sayar
  Approach: Color and Pattern-Based Tracking
  Description: The color and pattern of vehicle image series from traffic video surveillance are used for tracking. It consists of foreground and background segmentation, vehicle flow, shadow removal, vehicle velocity, vehicle count and vehicle location to track objects.
  Pros: This system is shown to work in different climatic conditions and is insensitive to lighting conditions.
  Cons: The system needs to be tested under extreme weather conditions, and occlusion problems also need to be checked.


6 APPLICATIONS
1. Monitoring traffic conditions, pedestrian crossings, and parking areas.
2. Video surveillance for fire and smoke detection.
3. Counting and classification of vehicles.
4. Monitoring vehicle speed.
5. Observation of day to day activities in shopping centers and amusement parks.

7 CONCLUSION
This paper provides a detailed study of the various techniques that are used in traffic video surveillance. It focuses on techniques for vehicle detection, classification and tracking that can be used to build an efficient traffic management system based on video surveillance. Smart visual surveillance in dynamic scenes under various environmental conditions has been considered, where the outdoor environment is more challenging for researchers than the indoor environment because of sunlight and illumination changes, human behavior, pedestrian crossings, anomaly detection of vehicles, waving trees, shadows, lighting, etc. The overall study gives a better understanding of, and highlights the issues and solutions for, traffic management systems.

REFERENCES
[1] J. Heikkila and O. Silven. A real-time system for monitoring of cyclists and pedestrians. In Proc. of Second IEEE Workshop on Visual Surveillance, pages 74-81, Fort Collins, Colorado, June 1999.
[2] C. Stauffer and W. E. L. Grimson. Adaptive background mixture models for real-time tracking. In Proc. Computer Vision and Pattern Recognition, 2: 246-252, 1999.
[3] A. J. Lipton, H. Fujiyoshi, and R. S. Patil. Moving target classification and tracking from real-time video. In Proc. of Workshop on Applications of Computer Vision, pages 129-136, 1998.
[4] R. T. Collins et al. A system for video surveillance and monitoring: VSAM final report. Technical report CMU-RI-TR-00-12, Robotics Institute, Carnegie Mellon University, May 2000.
[5] Y. Yuan, Z. Xiong, and Q. Wang, "An incremental framework for video based traffic sign detection, tracking, and recognition," IEEE Trans. Intell. Transp. Syst., vol. 18, no. 7, pp. 1918-1929, Jul. 2017.
[6] Peng Chen, Yuanjie Dang, Ronghua Liang, "Real-Time Object Tracking on a Drone With Multi-Inertial Sensing Data," IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 1, January 2018.
[7] Horn, B. and Schunck, B. (1981). Determining Optical Flow, Artificial Intelligence 17(1): 185-203.
[8] Lucas, B. and Kanade, T. (1981). An Iterative Image Registration Technique with an Application to Stereo Vision, Proceedings of the 7th International Joint Conference on Artificial Intelligence.
[9] Denman, S., Fookes, C. and Sridharan, S. (2009). Improved Simultaneous Computation of Motion Detection and Optical Flow for Object Tracking, Digital Image Computing: Techniques and Applications, DICTA 2009, IEEE, pp. 175-182.
[10] Kim, J., Ye, G. and Kim, D. (2010). Moving Object Detection Under Free-Moving Camera, Image Processing (ICIP), 2010 17th IEEE International Conference on, IEEE, pp. 4669-4672.
[11] Shafie, A., Hafiz, F. and Ali, M. (2009). Motion Detection Techniques Using Optical Flow, World Academy of Science, Engineering and Technology 56.
[12] Yokoyama, M. and Poggio, T. (2005). A Contour-Based Moving Object Detection and Tracking, Visual Surveillance and Performance Evaluation of Tracking and Surveillance, 2005, 2nd Joint IEEE International Workshop on, IEEE, pp. 271-276.
[13] Zhang, P., Cao, T. and Zhu, T. (2010). A Novel Hybrid Motion Detection Algorithm Based on Dynamic Thresholding Segmentation, Communication Technology (ICCT), 2010 12th IEEE International Conference on, IEEE, pp. 853-856.
[14] Girisha, R. and Murali, S. (2011). Tracking Humans using Novel Optical Flow Algorithm for Surveillance Videos, Proceedings of the Fourth Annual ACM Bangalore Conference, ACM, p. 7.
[15] Tzagkarakis, G., Charalampidis, P., Tsagkatakis, G., Starck, J.-L. and Tsakalides, P. (2012). Compressive Video Classification for Decision Systems with Limited Resources, Picture Coding Symposium (PCS), 2012, IEEE, pp. 353-356.
[16] Braillon, C., Pradalier, C., Crowley, J. L. and Laugier, C. (2006). Real-Time Moving Obstacle Detection using Optical Flow Models, Intelligent Vehicles Symposium, IEEE, pp. 466-471.
[17] Hrabar, S. and Sukhatme, G. S. (2004). A Comparison of Two Camera Configurations for Optic-Flow Based Navigation of a UAV Through Urban Canyons, Intelligent Robots and Systems, 2004 (IROS 2004), Proceedings, 2004 IEEE/RSJ International Conference on, Vol. 3, IEEE, pp. 2673-2680.
[18] Hrabar, S., Sukhatme, G. S., Corke, P., Usher, K. and Roberts, J. (2005). Combined Optic-Flow and Stereo-Based Navigation of Urban Canyons for a UAV, Intelligent Robots and Systems, 2005 (IROS 2005), 2005 IEEE/RSJ International Conference on, IEEE, pp. 3309-3316.
[19] Xiao, J., Cheng, H., Feng, H. and Yang, C. (2008). Object Tracking and Classification in Aerial Videos, Proceedings of SPIE, the International Society for Optical Engineering, Society of Photo-Optical Instrumentation Engineers, pp. 696711-1.
[20] Basavaiah, M. (2012). Development of Optical Flow Based Moving Object Detection and Tracking System on an Embedded DSP Processor, Journal of Advances in Computational Research: An International Journal 1(1-2).
[21] Hu, W., Tan, T., Wang, L. and Maybank, S. (2004). A Survey on Visual Surveillance of Object Motion and Behaviors, Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on 34(3): 334-352.
[22] Baker, S., Scharstein, D., Lewis, J., Roth, S., Black, M. and Szeliski, R. (2011). A Database and Evaluation Methodology for Optical Flow, International Journal of Computer Vision 92(1): 1-31.
[23] Galvin, B., McCane, B., Novins, K., Mason, D. and Mills, S. (1998). Recovering Motion Fields: An Evaluation of Eight Optical Flow Algorithms, British Machine Vision Conference, Vol. 1, pp. 195-204.

[24] Shuigen, W., Zhen, C. and Hua, D. (2009). Motion Detection Based on Temporal Difference Method and Optical Flow Field, Electronic Commerce and Security, 2009, ISECS '09, Second International Symposium on, Vol. 2, IEEE, pp. 85-88.
[25] J. M. McHugh, J. Konrad, V. Saligrama, P. Jodoin (2009), Foreground-adaptive background subtraction. IEEE Signal Processing Letters 16, 390-393.
[26] C. R. Wren, A. Azarbayejani, T. Darrell, A. Pentland, Pfinder: real-time tracking of the human body. IEEE Trans. Pattern Anal. Mach. Intell. 18(7), 780-785 (1997).
[27] A. Elgammal, R. Duraiswami, D. Harwood, L. Davis, Background and foreground modeling using nonparametric kernel density for visual surveillance. Proc. IEEE 90(7), 1151-1163 (2002).
[28] T. Aach, A. Kaup (1995), Bayesian algorithms for adaptive change detection in image sequences using Markov random fields. Signal Processing: Image Communication, Vol. 7, No. 2, 147-160.
[29] T. Aach, A. Kaup, R. Mester (1993), Statistical model-based change detection in moving video. Signal Processing 31, 165-180.
[30] Elham Kermani and Davud Asemani (2014), A robust adaptive algorithm of moving object detection for video surveillance, EURASIP Journal on Image and Video Processing 2014, 2014:27, pp. 2-9.
[31] Roy, A.; Shinde, S.; Kang, K.-D. (2012); An Approach for Efficient Real Time Moving Object Detection, International Journal of Signal Processing, Image Processing and Pattern Recognition, 5(3), 2012.
[32] Cheng, H.-Y.; Weng, C.-C.; Chen, Y.-Y. (2012); Vehicle Detection in Aerial Surveillance Using Dynamic Bayesian Networks, IEEE Transactions on Image Processing, 21(4): 2152-2159, 2012.
[33] Philip, F. M.; Mukesh, R. (2016); Hybrid tracking model for multiple object videos using second derivative based visibility model and tangential weighted spatial tracking model, International Journal of Computational Intelligence Systems, 9(5): 888-899, 2016.
[34] H. Bay, T. Tuytelaars, L. Van Gool, "SURF: Speeded up robust features," in Proc. Eur. Conf. Comput. Vis. (ECCV), 2006, pp. 404-417.
[35] X. Yang and K.-T. Cheng, "Local difference binary for ultrafast and distinctive feature description," IEEE Trans. Pattern Anal. Mach. Intell., vol. 36, no. 1, pp. 188-194, Jan. 2014.
[36] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: Binary robust independent elementary features," in Proc. Eur. Conf. Comput. Vis. (ECCV), 2010, pp. 778-792.
[37] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," in Proc. Int. Conf. Comput. Vis. (ICCV), Barcelona, Spain, Nov. 2011, pp. 2564-2571.
[38] Zakaria Moutakki, Imad Mohamed Ouloul, Karim Afde, Abdellah Amghar (2018), "Real-Time System Based On Feature Extraction For Vehicle Detection And Classification," Transport and Telecommunication, 2018, volume 19, no. 2, 93-102.
[39] Lai, J., Huang, S. and Tseng, C. (2010). Image-Based Vehicle Tracking and Classification on the Highway, Green Circuits and Systems (ICGCS), 2010 International Conference on, IEEE, pp. 666-670.
[40] Liu, X., Dai, B. and He, H. (2011). Real-Time On-Road Vehicle Detection Combining Specific Shadow Segmentation and SVM Classification, Digital Manufacturing and Automation (ICDMA), 2011 Second International Conference on, IEEE, pp. 885-888.
[41] Witten, D. M. and Tibshirani, R. (2011). Penalized Classification using Fisher's Linear Discriminant, Journal of the Royal Statistical Society: Series B (Statistical Methodology) 73(5): 753-772.
[42] Sonka, M., Hlavac, V. and Boyle, R. (1999). Image Processing, Analysis, and Machine Vision, PWS Pub.
[43] Han, F., Shan, Y., Cekander, R., Sawhney, H. and Kumar, R. (2006). A Two-Stage Approach to People and Vehicle Detection with HOG-Based SVM, Performance Metrics for Intelligent Systems Workshop in conjunction with the IEEE Safety, Security, and Rescue Robotics Conference, pp. 133-140.
[44] Ramakrishnan, V., Prabhavathy, A. K. and Devishree, J. (2012). A Survey on Vehicle Detection Techniques in Aerial Surveillance, International Journal of Computer Applications 55(18).
[45] Chen, Z., Pears, N., Freeman, M. and Austin, J. (2009). Road vehicle classification using support vector machines, Intelligent Computing and Intelligent Systems, 2009, ICIS 2009, IEEE International Conference on, Vol. 4, IEEE, pp. 214-218.
[46] Asha, G., Kumar, K. A. and Kumar, D. D. N. P. (2012). A Real Time Video Object Tracking Using SVM, International Journal of Engineering Science and Innovative Technology (IJESIT).
[47] Cao, X., Wu, C., Yan, P. and Li, X. (2011). Linear SVM Classification using Boosting HOG Features for Vehicle Detection in Low-Altitude Airborne Videos, Image Processing (ICIP), 2011 18th IEEE International Conference on, IEEE, pp. 2421-2424.
[48] Gallo, I. and Nodari, A. (2011). Learning Object Detection Using Multiple Neural Networks, Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, INSTICC Press.
[49] Park, J., Choi, H. and Oh, S. (2010). Real-Time Vehicle Detection in Urban Traffic Using AdaBoost, Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, IEEE, pp. 3598-3603.
[50] Zohrevand, A.; Ahmadyfard, A.; Pouyan, A.; Imani, Z. (2014); A SIFT based object recognition using contextual information, Iranian Conference on Intelligent Systems (ICIS), 1-4, 2014.
[51] Li, Y.; Su, G. (2015), Simplified histograms of oriented gradient features extraction algorithm for the hardware implementation, International Conference on Computers, Communications and Systems (ICCCS), 192-195, 2015.
[52] G. Jemilda, S. Baulkani (2018), Moving Object Detection and Tracking using Genetic Algorithm Enabled Extreme Learning Machine, International Journal of Computers Communications & Control, ISSN 1841-9836, 13(2), 162-174, April 2018.
[53] Shingade, A.; Ghotkar, A. (2014); Survey of Object Tracking and Feature Extraction Using Genetic Algorithm, International Journal of Computer Science and Technology, 5(1), 2014.
[54] D. Kleyko, R. Hostettler, W. Birk, E. Osipov, "Comparison of Machine Learning Techniques for Vehicle Classification Using Road Side Sensors," 2015 IEEE 18th Int. Conf. Intell. Transp. Syst., pp. 572-577, 2015.


[55] Z. Chen, T. Ellis and S. A. Velastin, "Vehicle type categorization: A comparison of classification schemes," 14th IEEE Annual Conference on Intelligent Transportation Systems, the George Washington University, Washington, DC, USA, pp. 74-79, Oct. 5-7, 2011.
[56] M. A. Manzoor, Y. Morgan, "Vehicle Make and Model Classification System using Bag of SIFT Features," 7th IEEE Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, pp. 572-577, 02 March 2017.
[57] A. H. S. Lai, G. S. K. Fung, N. H. C. Yung, "Vehicle Type Classification from Visual-Based Dimension Estimation," Intelligent Transportation Systems, 2001, Proceedings, 2001 IEEE, Oakland, CA, USA, pp. 201-206, 25-29 Aug. 2001.
[58] S. Kul, S. Eken, and A. Sayar, "Distributed and collaborative real-time vehicle detection and classification over the video streams," Int. J. Adv. Robot. Syst., vol. 14, no. 4, p. 172988141772078, Jul. 2017.
[59] Z. Dong, Y. Wu, M. Pei, and Y. Jia, "Vehicle Type Classification Using a Semisupervised Convolutional Neural Network," IEEE Trans. Intell. Transp. Syst., vol. 16, no. 4, pp. 2247-2256, Aug. 2015.
[60] F. Porikli and A. Yilmaz, "Object Detection and Tracking," in Video Analytics for Business Intelligence, vol. 409, C. Shan, et al., Eds., Springer Berlin Heidelberg, 2012, pp. 3-41.
[61] S. Rhee, et al., "Vehicle Tracking Using Image Processing Techniques," in Rough Sets and Current Trends in Computing, vol. 3066, S. Tsumoto, et al., Eds., Springer Berlin Heidelberg, 2004, pp. 671-678.
[62] S. Gupte, et al., "Detection and classification of vehicles," Intelligent Transportation Systems, IEEE Transactions on, vol. 3, pp. 37-47, 2002.
[63] L. Jin-Cyuan, et al., "Image-based vehicle tracking and classification on the highway," in Green Circuits and Systems (ICGCS), 2010 International Conference on, 2010, pp. 666-670.
[64] D. Koller, et al., "Towards robust automatic traffic scene analysis in real-time," in Decision and Control, 1994, Proceedings of the 33rd IEEE Conference on, 1994, pp. 3776-3781 vol. 4.
[65] A. Ambardekar, et al., "Efficient Vehicle Tracking and Classification for an Automated Traffic Surveillance System," in International Conference on Signal and Image Processing, 2008, pp. 1-6.
[66] R. Rad and M. Jamzad, "Real time classification and tracking of multiple vehicles in highways," Pattern Recognition Letters, vol. 26, pp. 1597-1607, 2005.
[67] N. H. C. Yung and A. H. S. Lai, "Detection of vehicle occlusion using a generalized deformable model," in Circuits and Systems, 1998, ISCAS '98, Proceedings of the 1998 IEEE International Symposium on, 1998, pp. 154-157 vol. 4.
[68] F. Bardet, et al., "Unifying real-time multi-vehicle tracking and categorization," in Intelligent Vehicles Symposium, 2009 IEEE, 2009, pp. 197-202.
[69] K. ZuWhan and J. Malik, "Fast vehicle detection with probabilistic feature grouping and its application to vehicle tracking," in Computer Vision, 2003, Proceedings, Ninth IEEE International Conference on, 2003, pp. 524-531 vol. 1.
[70] M. Xiaoxu and W. E. L. Grimson, "Edge-based rich representation for vehicle classification," in Computer Vision, 2005, ICCV 2005, Tenth IEEE International Conference on, 2005, pp. 1185-1192 Vol. 2.
[71] J.-W. Hsieh, et al., "Automatic traffic surveillance system for vehicle tracking and classification," Intelligent Transportation Systems, IEEE Transactions on, vol. 7, pp. 175-187, 2006.
[72] H. Mao-Chi and Y. Shwu-Huey, "A real-time and color-based computer vision for traffic monitoring system," in Multimedia and Expo, 2004, ICME '04, 2004 IEEE International Conference on, 2004, pp. 2119-2122 Vol. 3.
[73] G. D. Sullivan, et al., "Model-based vehicle detection and classification using orthographic approximations," Image and Vision Computing, vol. 15, pp. 649-654, 1997.
[74] Sheldon Xu and Anthony Chang, Robust Object Tracking Using Kalman Filter with Dynamic Covariance, Cornell University.
[75] Dorin Comaniciu, Peter Meer, "Mean Shift: A Robust Approach Toward Feature Space Analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24(5), 2002, pp. 603-619.
[76] Zhang Hongzhi, Zhang Jinhuan, Yue Hui, Huang Shilin, "Object tracking algorithm based on camshift," Computer Engineering and Design, Vol. 27(11), 2006, pp. 2012-2014.
[77] Youssef Zinbi, Youssef Chahir, "Moving object segmentation using optical flow with active contour model," IEEE Conference on ICTTA, 2008, pp. 1-5.
[78] P. Ondruska and I. Posner, "Deep tracking: Seeing beyond seeing using recurrent neural networks," in The Thirtieth AAAI Conference on Artificial Intelligence (AAAI), Phoenix, Arizona, USA, February 2016.
[79] P. Ondruska, J. Dequaire, D. Z. Wang, and I. Posner, "End-to-end tracking and semantic segmentation using recurrent neural networks," arXiv preprint arXiv:1604.05091, 2016.
[80] Jing Xin, Xing Du, Jian Zhang (2017), Deep Learning For Robust Outdoor Vehicle Visual Tracking, Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2017.
[81] W. Zhang, et al., "Moving vehicles detection based on adaptive motion histogram," Digit. Signal Process., vol. 20, pp. 793-805, 2010.
[82] W. Tao and Z. Zhigang, "Real time moving vehicle detection and reconstruction for improving classification," in Applications of Computer Vision (WACV), 2012 IEEE Workshop on, 2012, pp. 497-502.
[83] C. Yen-Lin, et al., "Real-time vision-based multiple vehicle detection and tracking for nighttime traffic surveillance," in Systems, Man and Cybernetics, 2009, SMC 2009, IEEE International Conference on, 2009, pp. 3352-3358.

