
REAL-TIME ACCIDENT DETECTION SYSTEM IN

TRAFFIC SURVEILLANCE USING DEEP LEARNING


BY
PATAN SURAJ KHAN (222W1D5803)
DEPARTMENT OF
COMPUTER SCIENCE AND ENGINEERING

Under the guidance of


CH PAPA RAO, M-Tech., (Ph.D.)
Associate Professor

GVR&S College of Engineering & Technology


Budampadu (V), Guntur (D), Andhra Pradesh - 522013
ABSTRACT
 The study aims to develop an innovative framework for automated traffic accident detection at
urban intersections using surveillance cameras and computer vision. The framework includes
four hierarchical phases:

1. Object Tracking and Association: uses a Kalman filter and the Hungarian algorithm,
handling occlusions, overlapping objects, and changes in shape.
2. Trajectory Conflict Analysis: examines object trajectories, considering speed, angle, and
distance, to identify various conflicts involving vehicles, pedestrians, and bicycles.
3. YOLOv4-Based Detection: applies an advanced YOLOv4 model for accurate and efficient
object detection.
4. Efficient and Accurate Operation: completes the process with a focus on low false
alarm rates and high detection rates.
 Experiments using real traffic video footage demonstrate the framework's potential for real-
time traffic monitoring, highlighting its effectiveness in accurately identifying trajectory
conflicts.
INTELLIGENT TRANSPORTATION SYSTEM
An intelligent transportation system (ITS) integrates advanced data and communication
networks across users, roadways, and vehicles. An ITS uses sophisticated sensors, hardware,
computers, and communication systems to improve transportation efficiency and safety while
providing vital information to travellers. The various communication technologies within an
ITS are interdependent.
MOTIVATION
 Because of its many long-lasting applications, the demand for sophisticated monitoring
systems is rising. However, complex environments and large volumes of video data make the
task progressively harder: a good tracking system needs automated methods that are fast,
reliable, and focused in order to recognize and follow moving objects.
 Despite extensive work on tracking and identifying moving objects, a number of contextual
factors affect object extraction, making it difficult to detect moving objects robustly. Two
common approaches are background subtraction and optical flow-based techniques.

 In addition, optical flow-based methods perform poorly in scenes with substantial
background noise or other disturbances. Because they present visual complexity worth
studying, moving object detection algorithms should be evaluated under several difficult
scenarios. To improve the performance of the video surveillance system, this study offers
two distinct methods for identifying moving vehicles.
LITERATURE SURVEY

 Monitoring and classifying vehicles in motion is a substantial and expanding field of video
surveillance research. Several works apply Artificial Neural Networks (ANNs) to the problem;
a few are summarized here. One approach provided an improved neural network for moving
vehicle detection.
 Using the background subtraction method, that system removes the background from each
video and collects the HoG feature from every object. An ANN classifier then examines the
retrieved characteristics to determine whether a picture shows a vehicle. The Oppositional
Gravitational Algorithm (OGA) was used to determine the best weight values, improving the
ANN's performance.

 A technique for automated traffic monitoring combines Principal Component Analysis (PCA)
with a Radial Basis Function (RBF) network to recognize objects in motion. Both high- and
low-quality video streams are used to test this strategy for moving vehicle detection. The
first of its kind, this research uses bandwidth-restricted video captured at different bit
rates to test a traffic surveillance system's motion detection capabilities. Improving
transportation security is the driving force behind this inquiry.
REVIEW ARTICLES FROM DEEP LEARNING ALGORITHM
BASED MOVING VEHICLE DETECTION
 By learning features and classifications directly from the input image, deep learning aims
to achieve end-to-end learning. Here we take a closer look at how moving vehicle detection
with deep learning techniques works. To enable the tracking and classification of a large
number of cars, one study developed a system based on CNN classifiers, enhancing both
training efficiency and the precision of vehicle categorization predictions. The initial
findings show that it was feasible to track and classify a large number of vehicles while
also determining their speeds.
 An integrated machine learning mechanism had to be designed into the digital video camera
system so that it could watch and distinguish between many lanes of vehicles simultaneously
while separating the region of interest (the moving vehicle).

 Accurate tracking and recognition of moving vehicles is a key component of image processing
and AI research. One common approach was to use computer vision methods; in this case, a
Convolutional Neural Network (CNN) was trained to improve the accuracy of route violation
detection.
MOVING VEHICLE DETECTION SYSTEM USING
OPTIMAL PROBABILISTIC NEURAL NETWORKS

 Smart reconnaissance is increasingly important for transportation network safety and security.
Automatic monitoring systems handle tasks like event detection, object identification,
tracking, motion segmentation, and behavior analysis. Developing effective object tracking
algorithms is challenging due to issues like occlusion, lighting changes, position variations,
rapid motions, and background clutter. Advances in this field have led to significant results,
benefiting areas such as navigation, human-computer interaction, video surveillance, and
motion analysis. Vehicle detection on roads is crucial for preventing traffic jams and is
essential for various applications in surveillance and image interpretation.

 To improve moving vehicle detection, it's crucial to adjust learning parameters, especially as
cars may merge to avoid traffic jams. While complex parameters are rarely used, they can be
trained using advanced machine learning techniques. By analyzing cars' visual characteristics,
object detection can identify key image components. This chapter introduces a novel method
using optimal probabilistic neural networks for accurate vehicle detection.
CUCKOO SEARCH ALGORITHM
 The Cuckoo Search (CS) optimization algorithm is inspired by cuckoos' behaviour of laying
eggs in other birds' nests. These eggs mimic the host's eggs to avoid detection; if detected,
the host birds destroy the foreign eggs, so the algorithm improves as the survival rate of
these eggs increases. In CS, a cuckoo randomly selects a nest for its egg, and the best nest
with the highest-quality eggs is carried over to the next generation.
 The probability P, a number between zero and one, represents the likelihood that the host
will detect an egg deposited by an outsider. The maximum number of host nests is fixed, and
host birds can either abandon the foreign eggs or leave to build a nest in a new place.
PROPOSED MOVING VEHICLE DETECTION USING OCS-
PNN MODEL

 This work emphasizes an innovative OCS-PNN model for identifying and categorizing moving
cars, which is crucial for the Moving Vehicle Detection (MVD) system.
 The system supports smart mobility infrastructure and traffic monitoring by accurately
identifying mobile vehicles. The OCS-PNN model consists of two main parts: background
generation and moving vehicle identification, as shown in the figure.
 This technology enables more accurate identification of mobile vehicles.
GRASSHOPPER OPTIMIZATION ALGORITHM
 Grasshopper Optimization is a biologically inspired method based on the swarming behaviour
of grasshoppers. A grasshopper's life cycle has three stages: egg, nymph, and adult. In its
nymphal swarm stage the grasshopper travels slowly in small steps, while as an adult it
takes big steps and moves quickly.

 The algorithm maps this behaviour onto exploration and exploitation, optimizing the pace at
which it reaches a desired conclusion: the agent flies swiftly over the search area during
exploration and often changes its trajectory abruptly at the exploitation (detection) level.
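A simplified sketch of the idea above: agents attract or repel each other through a social force s(r), and a "comfort zone" coefficient c shrinks over iterations, so early moves are big adult-style jumps and late moves are small nymph-style steps around the best solution found. The force constants, population size, and sphere objective are assumptions for illustration only.

```python
import math, random

def goa(fitness, dim=2, n=20, iters=100, lb=-5.0, ub=5.0, seed=1):
    """Simplified Grasshopper Optimization sketch."""
    rng = random.Random(seed)
    # Social force: attraction at long range, repulsion at short range.
    s = lambda r: 0.5 * math.exp(-r / 1.5) - math.exp(-r)
    pos = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    best = min(pos, key=fitness)
    for t in range(iters):
        c = 1.0 - t / iters  # shrinking comfort-zone coefficient
        for i in range(n):
            new = []
            for d in range(dim):
                social = 0.0
                for j in range(n):
                    if i == j:
                        continue
                    dist = math.dist(pos[i], pos[j]) + 1e-12
                    social += (c * (ub - lb) / 2 * s(dist)
                               * (pos[j][d] - pos[i][d]) / dist)
                # Move relative to the swarm and the best target so far.
                new.append(min(ub, max(lb, c * social + best[d])))
            pos[i] = new
            if fitness(new) < fitness(best):
                best = list(new)
    return best

sphere = lambda x: sum(v * v for v in x)  # toy objective to minimize
best = goa(sphere)
```

In the proposed system, the same loop would tune neural-network weights instead of minimizing a toy function.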
PROPOSED MOVING VEHICLE DETECTION SYSTEM
 To address the challenge of vehicle identification, a two-step system can be devised:
1. Hypothesis Generation (HG):
Generate hypotheses or potential
candidate vehicles based on visual cues
from the camera data. This involves
detecting shapes, colors, and motion
patterns that resemble vehicles.
2. Hypothesis Verification (HV):
Evaluate and verify the generated
hypotheses to accurately classify them
as either vehicles or non-vehicles. This
phase involves more detailed analysis
and possibly using advanced algorithms
for object recognition and classification.
 Using a classification and feature extraction technique, the hypotheses recovered in the
HG stage are evaluated in the HV stage for their status as vehicles.
 The primary objective of this proposed system is to enhance road safety by reliably
identifying nearby cars using a camera mounted on a moving vehicle. The figure outlines a
detailed process for implementing this vehicle identification system, ensuring thorough
detection and classification of vehicles in road images.
 Overall, there are three steps to verifying a hypothesis:
(i) extracting features,
(ii) training OANNs,
(iii) applying the verification approach.
FEATURE EXTRACTION
 The process of constructing feature vectors using Histogram of
Oriented Gradients (HOG) descriptors for vehicle images:
1. Image Division: Images from dataset D are divided into
smaller cells.
2. Gradient Orientation Histograms: HOG computes gradient
orientations for each pixel within these cells.
3. Block Construction: Histograms are normalized and
combined within larger blocks of cells.
4. Block Histograms: Histograms for each block are generated.
5. Block Combination: This process is repeated across
overlapping blocks within the image.
6. Feature Vector Formation: Histograms from all blocks are
combined into a final feature vector, capturing the unique
characteristics of each vehicle image based on gradient
orientations.
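The per-cell histogram steps (1, 2, and 6) above can be sketched as below. This toy version replaces the block normalization and overlapping-block aggregation of steps 3-5 with a single global normalization; cell size, bin count, and the ramp test image are assumptions, and library implementations such as skimage's `hog` should be preferred in practice.

```python
import numpy as np

def hog_cell_histograms(image, cell=4, bins=9):
    """Minimal HOG sketch: weighted gradient-orientation histograms
    per cell, concatenated and globally normalized."""
    img = image.astype(float)
    gy, gx = np.gradient(img)                    # per-pixel gradients
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180   # unsigned orientation
    h, w = img.shape
    feats = []
    for r in range(0, h - cell + 1, cell):       # step 1: divide into cells
        for c in range(0, w - cell + 1, cell):
            a = ang[r:r + cell, c:c + cell].ravel()
            m = mag[r:r + cell, c:c + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)                   # step 2: per-cell histogram
    vec = np.concatenate(feats)                  # step 6: final feature vector
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec       # crude global normalization

patch = np.tile(np.arange(8), (8, 1)).astype(np.uint8)  # horizontal ramp
vec = hog_cell_histograms(patch)  # 2x2 cells x 9 bins = 36 features
```

For the ramp image all gradient energy falls in the 0-degree bin, which is why HOG captures the dominant edge orientations of a vehicle silhouette.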
TRAINING USING OANN
 The Object-Aware Neural Network (OANN) classifier is trained using the newly generated
HOG feature vectors. Two databases are used for this purpose: one for training and the
other for testing and hypothesis formulation. The first dataset is used to train the
classifier, while the second contains actual traffic incidents.
 During training, the retrieved features are input
into an artificial neural network (ANN). The
classifier analyzes these features to determine
if they represent a vehicle or not. The ANN is
enhanced using the Grasshopper Optimization
Algorithm (GOA), which fine-tunes the neural
network weights, resulting in the Object-Aware
Neural Networks (OANN). The primary goal
of OANN is to classify input features as
vehicles or non-vehicles.
ROAD-USER DETECTION
 This module uses YOLOv4 for object detection in video and image analytics, particularly to
identify and categorize road users. YOLOv4, known for its efficiency and performance,
divides images into grid cells for object detection, calculating confidence scores by
multiplying the intersection over union (IOU) with item probabilities.
 Starting with the CSPDarknet53 backbone for feature extraction, it incorporates a path
aggregation network (PANet) and spatial attention modules, and uses dense prediction blocks
for bounding-box localization and classification. Trained on the dataset, YOLOv4 excels
even on aerial images. The focus is on analyzing trajectory conflicts at urban intersections
involving cars, pedestrians, and bicycles.
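The confidence score described above (IOU multiplied by the object probability) can be sketched for axis-aligned boxes; the box coordinates and the 0.9 class probability below are hypothetical values for illustration.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)   # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Confidence as the slide describes: IOU times the object probability.
pred, truth = (0, 0, 10, 10), (5, 0, 15, 10)  # hypothetical boxes
confidence = iou(pred, truth) * 0.9           # 0.9 = assumed probability
```

The same IOU computation also drives non-maximum suppression when YOLOv4 prunes duplicate detections of the same road user.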
ROAD-USER TRACKING
 This module performs multiple object tracking (MOT) in video analytics. Using a technique
based on SORT, it employs the Hungarian method to associate bounding boxes between frames.

 The Kalman filter predicts the future positions of detected objects, improving association,
providing continuous trajectories, and anticipating track absences.

 A linear velocity model determines the inter-frame displacement of each object. The Kalman
filter tracking approach includes various objective states to achieve these tracking goals.
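The linear velocity model above can be sketched as a constant-velocity Kalman filter over the state [x, y, vx, vy] of each bounding-box centre. The noise covariances Q and R and the measurement values are assumed, and the Hungarian association step is omitted here.

```python
import numpy as np

dt = 1.0  # one frame
F = np.array([[1, 0, dt, 0],   # state transition: position += velocity * dt
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # we only observe the (x, y) centre
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01           # process noise (assumed)
R = np.eye(2) * 1.0            # measurement noise (assumed)

def predict(x, P):
    """Project the state ahead one frame (used to anticipate positions)."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with an associated detection z = (x, y)."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

x = np.array([0.0, 0.0, 2.0, 1.0])  # at origin, moving right and up
P = np.eye(4)
x, P = predict(x, P)                        # predicted centre: ~(2, 1)
x, P = update(x, P, np.array([2.1, 0.9]))   # correct with a detection
```

When no detection is associated in a frame, the track simply keeps predicting, which is how occlusions and brief track absences are bridged.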
ACCIDENT DETECTION
 Identifying conflicts between road users (bicyclists, pedestrians, and cars) at
intersections helps improve traffic management systems. Near-accidents are emphasized
because they provide insight into junction design or signal management issues. The system
identifies potential conflicts by checking whether the Euclidean distances between object
pairs fall below a certain threshold and by analyzing their velocity and trajectory over
multiple video frames. Average coordinates of bounding-box centres are calculated for
different frame halves. Camera calibration involves selecting points in the image and the
corresponding Google Maps locations, translating them into real-world coordinates using the
inverse homography matrix H^-1.
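The Euclidean-distance check above can be sketched as follows; the threshold, track IDs, and coordinates are hypothetical, and a full implementation would also apply the velocity and trajectory-angle conditions over multiple frames.

```python
import math

def flag_conflicts(tracks, dist_thresh=2.0):
    """Flag pairs of road users whose latest centres come within
    `dist_thresh` (calibrated real-world units) of each other."""
    conflicts = []
    ids = sorted(tracks)
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            (xa, ya) = tracks[ids[i]][-1]   # latest centre of user i
            (xb, yb) = tracks[ids[j]][-1]   # latest centre of user j
            if math.hypot(xa - xb, ya - yb) < dist_thresh:
                conflicts.append((ids[i], ids[j]))
    return conflicts

# Hypothetical tracks: object centres per frame in metres (after the
# homography maps image pixels to real-world coordinates).
tracks = {
    "car_1": [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
    "bike_7": [(5.0, 1.0), (4.0, 0.5), (3.0, 0.2)],
    "ped_3": [(20.0, 20.0), (20.0, 20.0), (20.0, 20.0)],
}
pairs = flag_conflicts(tracks)  # car and bicycle converge; pedestrian is far
```

Each flagged pair would then be classified by the road-user types involved (V2V, V2B, or V2P), as in the figures that follow.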
VEHICLE-TO-VEHICLE (V2V) TRAFFIC ACCIDENTS AT
INTERSECTIONS DETECTED BY OUR PROPOSED FRAMEWORK.
THE RED CIRCLES INDICATE THE LOCATION OF THE INCIDENTS
DIFFERENT TYPES OF CONFLICTS DETECTED AT THE INTERSECTIONS. (A)
VEHICLE-TO-VEHICLE (V2V) NEAR-ACCIDENT, (B) VEHICLE-TO-BICYCLE (V2B)
NEAR-ACCIDENT, (C) AND (D) VEHICLE-TO-PEDESTRIAN (V2P) ACCIDENT
COMPARATIVE RESULTS BASED ON OCS-PNN MODEL

 The OCS-PNN model, used for identifying moving cars, is compared with other techniques such
as PNN+CS, Bo-Hoo, and Shin. The comparison emphasizes the effectiveness of combining PNN
and OCS for motion detection, noting differences in optimization methods compared with
PNN+CS; Bo-Hoo and Shin's work addresses moving object extraction without exclusive use of
the PNN model for optimization.
 The comparison evaluates performance using metrics such as precision, recall, F-measure,
and similarity, illustrating the superiority of the proposed OCS-PNN model in motion
detection.
PERFORMANCE OF MOTION DETECTION USING PRECISION MEASURE AND RECALL MEASURE

PERFORMANCE OF MOTION DETECTION USING SIMILARITY AND F-MEASURE
COMPARATIVE ANALYSIS OF PROPOSED METHODOLOGY USING
VIDEO 1,VIDEO 2,VIDEO 3
RESULT
CONCLUSION
This study introduces a novel framework for automatically recognizing accidents and
near-accidents at traffic crossings. The framework comprises three primary modules:

1. Object recognition using the YOLOv4 method
2. Tracking using the Kalman filter and the Hungarian algorithm with a unique cost function
3. Accident detection by analyzing the obtained trajectories for anomalies

The robust tracking approach handles occlusion, overlapping objects, and shape
changes. The analysis of routes monitors the travel patterns of road users based on their
position, velocity, and movement direction, using heuristic signals to identify potential traffic
accidents. The framework's effectiveness is validated using a collection of traffic recordings
that depict accidents or near-accidents. Experimental assessments show the system's
success in real-time traffic control applications.
NOMENCLATURE
 YOLO You Only Look Once
 MVD Moving Vehicle Detection
 HOG Histograms of Oriented Gradients
 MAE Mean Absolute Error
 CNN Convolutional Neural Network
 PNN Probabilistic Neural Network
 PCA Principal Component Analysis
 SGD Stochastic Gradient Descent
 GOA Grasshopper Optimization Algorithm
 OGA Oppositional Gravitational Algorithm
 OANN Object Aware Neural Network
 OCS Oppositional Cuckoo Search
THANK YOU -- PATAN SURAJ KHAN

Any Queries ?
