Sample Project Document - Phase-1
By
SCHOOL OF COMPUTING
SATHYABAMA
INSTITUTE OF SCIENCE AND TECHNOLOGY
(DEEMED TO BE UNIVERSITY)
Accredited with Grade “A++” by NAAC
JEPPIAAR NAGAR, RAJIV GANDHI SALAI,
CHENNAI - 600119
NOVEMBER - 2023
SATHYABAMA
INSTITUTE OF SCIENCE AND TECHNOLOGY
(DEEMED TO BE UNIVERSITY)
Accredited with Grade “A++” by NAAC
Jeppiaar Nagar, Rajiv Gandhi Salai, Chennai – 600 119
www.sathyabama.ac.in
BONAFIDE CERTIFICATE
This is to certify that this Project Report is the bonafide work of Anusha S (3911000)
and Subiksha E (3911001), who carried out the Project Phase-1 entitled “RTSC-DVM:
A NOVEL METHODOLOGY FOR REAL-TIME SIREN CALL TO HALT DISTRACTED
VEHICLE MISHAPS” under my supervision from June 2023 to November 2023.
Internal Guide
Dr. B. ANKAYARKANNI M.E., Ph.D
Internal Examiner                                External Examiner
DECLARATION
I, Anusha S (Reg. No. 3911001), hereby declare that the Project Phase-1 Report entitled
“RTSC-DVM: A NOVEL METHODOLOGY FOR REAL-TIME SIREN CALL TO
HALT DISTRACTED VEHICLE MISHAPS”, done by me under the guidance of Dr.
B. Ankayarkanni, M.E., Ph.D., is submitted in partial fulfillment of the requirements
for the award of the Bachelor of Engineering degree in Computer Science and
Engineering.
ACKNOWLEDGEMENT
I convey my thanks to Dr. T. Sasikala, M.E., Ph.D., Dean, School of Computing, and Dr. L.
Lakshmanan, M.E., Ph.D., Head of the Department of Computer Science and Engineering,
for providing me the necessary support and details at the right time during the progressive
reviews.

I would like to express my sincere and deep sense of gratitude to my Project Guide,
Dr. B. Ankayarkanni, M.E., Ph.D., whose valuable guidance, suggestions, and constant
encouragement paved the way for the successful completion of my Phase-1 project work.

I wish to express my thanks to all Teaching and Non-teaching staff members of the
Department of Computer Science and Engineering who were helpful in many ways
for the completion of the project.
ABSTRACT
TABLE OF CONTENTS

Chapter No.   TITLE
              ABSTRACT
1             INTRODUCTION
2             LITERATURE SURVEY
              2.1 Inferences from Literature Survey
3             REQUIREMENTS ANALYSIS
              3.1 Feasibility Studies/Risk Analysis of the Project
              3.2 Software Requirements Specification Document
              REFERENCES
LIST OF FIGURES

Figure No.   Figure Name
4.1          A representation of a driver drowsiness system using Raspberry Pi
4.2          System Architecture for Drowsiness Judgment
CHAPTER 1
INTRODUCTION
In recent years, a surge in the demand for modern transportation has simultaneously
demanded faster growth in car safety. At present, the automobile is the most essential mode
of transportation. Although it has changed people’s lifestyles and made daily activities more
convenient, it is also associated with serious negative side-effects, such as road accidents and
traffic congestion caused by distraction, fatigue, and exhaustion. These are significant and
latent dangers responsible for great loss of life. In recent years, researchers have tried to
prevent further losses by spotting such symptoms pre-emptively. These recognition methods
are characterized as subjective and objective detection.

In the subjective detection method, the driver must participate in the evaluation, which
draws on the driver’s subjective perceptions through steps such as self-questioning. These
data are then used to estimate the danger posed by vehicles driven by exhausted drivers,
helping them plan their schedules accordingly. In the objective detection method, however,
no driver feedback is required: the driver’s physiological state and driving-behavior
characteristics are monitored in real time, and the collected data are used to evaluate the
driver’s level of fatigue. Objective detection is further categorized into contact and
non-contact methods. Compared with the contact method, the non-contact method is
cheaper and more convenient because it requires only computer vision technology and a
sophisticated camera, which allows the device to be deployed in large numbers. Owing to
its easy installation and low cost, the non-contact method is widely used for this problem
statement. For instance, Attention Technologies and Smart Eye observe the movement of
the driver’s eyes and the position of the driver’s head to determine the level of fatigue.

In this study, we propose a non-contact method to detect the driver’s level of fatigue. Our
method employs only a vehicle-mounted camera, making it unnecessary for the driver to
wear or carry any on-body devices. Our design analyzes each frame of the video to detect
the driver’s concentration state.
Driver drowsiness detection is a car safety technology that prevents accidents when the
driver is becoming drowsy. Various studies have suggested that around 20% of all road
accidents are fatigue-related, rising to 50% on certain roads. Driver fatigue is a significant
cause of a large number of vehicle accidents. Recent statistics estimate that annually 1,200
deaths and 76,000 injuries can be attributed to fatigue-related crashes. The development of
technologies for detecting or preventing drowsiness at the wheel is a major challenge in the
field of accident-avoidance systems. Because of the risk that drowsiness or fatigue presents
on the road, methods must be developed to counteract its effects. Driver drowsiness and
distraction can, in any case, have the same consequences: diminished driving performance,
longer reaction time, and an increased risk of crash involvement. Based on video acquired
from a camera placed in front of the driver, the system performs real-time processing of the
incoming video stream to gauge the driver’s level of fatigue; if drowsiness is detected, it
delivers an alert based on the state of the eyes, mouth, and head posture.
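The eye-closure cue mentioned above is commonly quantified with the eye aspect ratio (EAR), which drops toward zero as the eyelids close. The sketch below computes it from six eye landmarks; the 0.25 threshold and the landmark coordinates are illustrative assumptions, not values from this project.

```python
import math

def eye_aspect_ratio(eye):
    """Compute EAR from six (x, y) eye landmarks p1..p6.

    EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|)
    """
    a = math.dist(eye[1], eye[5])  # vertical distance p2-p6
    b = math.dist(eye[2], eye[4])  # vertical distance p3-p5
    c = math.dist(eye[0], eye[3])  # horizontal distance p1-p4
    return (a + b) / (2.0 * c)

# Hypothetical landmark coordinates for illustration only:
open_eye   = [(0, 0), (1, 0.6), (3, 0.6), (4, 0), (3, -0.6), (1, -0.6)]
closed_eye = [(0, 0), (1, 0.05), (3, 0.05), (4, 0), (3, -0.05), (1, -0.05)]
```

With these points the open eye yields an EAR of 0.3 and the nearly closed eye a much smaller value, so a frame can be flagged when the EAR falls below an assumed threshold such as 0.25.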
CHAPTER 2
LITERATURE SURVEY
This problem statement has been extensively studied over the past five years by researchers
and automotive companies, and their solutions range from analyzing patterns of distracting
habits to analyzing the driver’s health vitals.

The work of Dr. K.S. Tiwari et al. [1] introduced an eye-blink monitoring system and
provided a buzzer to alert the driver of his condition, whereas the research paper of
CeerthiBala et al. [10] proposed a system that alerts the traffic department when distraction
is perceived.
Some studies have used neurocognitive information, especially EEG, which has been used
to show differences in brain dynamics when alertness changes during driving. Jap et al.’s
research on drowsiness and fatigue detection using EEG showed that the ratio of slow to
fast EEG waves increased when the subject, in our case the driver, was distracted or
influenced by fatigue.

Gharagozlou et al. suggested that different levels of fatigue and distraction can be estimated
using band-power and EEG signal-entropy features, showing a notable increase in alpha
power corresponding to driver fatigue.

Rateb Jabbar et al. [2] suggested that the accuracy of detecting sleepiness increases when
facial landmarks are used with a Convolutional Neural Network (CNN). In J. Hu and
J. Min’s paper, entropy features were combined with a Gradient Boosting Decision Tree
model. Deep-learning models for fatigue classification have also been proposed, as in
H. Zeng et al.’s work, through a Residual Convolutional Neural Network (EEG-Conv-R),
using data collected from 10 healthy subjects over 16 channels.
In P.P. San’s research paper, a combination of a deep neural network with a support vector
machine (SVM) classifier at the last layer was also proposed. Other previous works are
based on intra-subject approaches. Some cross-subject approaches have also been proposed,
as by H. Zeng et al., by combining EEG samples from all subjects and then splitting them
randomly into training and testing sets. This splitting is inherently random, so it ends up
mixing some training subjects’ samples with the testing ones, which is not truly
cross-subject.

In Y. Liu et al.’s research paper, the authors perform domain adaptation, a branch of
transfer learning, to adapt the data distributions of source and target so that classification is
more effective in a cross-subject scenario.

Md. Yousuf Hossain et al. [3] proposed a non-intrusive system using the eye-closure ratio
as the input parameter. In Y. Liu et al.’s paper, EEG features (statistics, higher-order
crossings, fractal dimension, signal energy, and spectral power) were extracted and
combined with several classifiers, such as logistic regression, linear discriminant analysis,
1-nearest neighbor, linear SVM, and naïve Bayes.
Mika Sunagawa et al. [4] proposed a model capable of accurately sensing the entire range
of distraction stages, from weak to strong.

The methodologies of Wang et al., N. Hatami et al., and Z. Zhao et al. (recurrence plots
and Gramian angular fields) have been successfully applied in computer-vision algorithms
combined with deep learning; they have appeared in recent work in the EEG research
domain but remain relatively unexplored.
Other works have applied Support Vector Machines (SVMs), Convolutional Neural
Networks (CNNs), and Hidden Markov Models in this context. The work of Naveen
Senniappan Karuppusamy et al. [8] suggested an electroencephalography-based sleepiness
detection system (ESDS) with an accuracy of 93.91%.
Yaocong Hu et al. [9] proposed a new deep-learning framework based on a hybrid of a 3D
conditional generative adversarial network and a two-level attention bidirectional long
short-term memory network. It was proposed for robust recognition: a 3D encoder-decoder
generator, conditioned on auxiliary information, extracts short-term spatial-temporal
features and generates high-quality fake image sequences, while a 3D discriminator learns
drowsiness-related representations from the spatial-temporal domain. In addition, for
long-term spatial-temporal fusion, they investigated a two-level attention mechanism to
guide the bidirectional long short-term memory network in learning the saliency of
short-term memory information and long-term temporal information.
CHAPTER 4
Looking at the disadvantages of the methodologies used in previous systems, the most
common issue is that most of them were implemented using only a pre-defined dataset of
faces with closed and open eyes. They also relied on purely visual alerts to inform the
driver of his state, which is ineffective because a visual alarm requires the driver to already
be alert enough to see it, defeating the whole purpose. In some systems, the response time
between detecting the driver’s state and alerting him was found to be too long to prevent
mishaps in time. Some systems were too sensitive to eye blinks and yawns, while others
sounded alarms continuously for long periods, spamming the driver. Our system aims to
overcome all these issues while giving the best accuracy in the results. Our basic idea is to
monitor the physical state of the driver while he is driving, using a live camera. We use
facial parameters to track the driver’s eyes for frequent blinks when he is tired, and also
track his mouth movements for yawning. When our system detects either of these changes,
our model immediately emits an alarm sound as loud as a siren to bring the driver back
to alertness.
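The alerting behavior described above can be sketched as per-frame logic: the siren fires only after the drowsy condition persists for several consecutive frames (avoiding over-sensitivity to normal blinks), and it does not re-fire every frame (avoiding the alarm spam noted in earlier systems). The EAR threshold and frame count below are illustrative assumptions.

```python
class DrowsinessAlarm:
    """Sound the siren only after sustained eye closure or a yawn,
    and fire it once per drowsy episode rather than every frame."""

    def __init__(self, ear_threshold=0.25, consec_frames=20):
        self.ear_threshold = ear_threshold  # assumed EAR cut-off for "eyes closing"
        self.consec_frames = consec_frames  # assumed frames of closure before alarm
        self.counter = 0                    # consecutive drowsy frames so far
        self.fired = False                  # True once the siren has sounded

    def update(self, ear, yawning):
        """Feed one frame's measurements; return True when the siren should start."""
        if ear < self.ear_threshold or yawning:
            self.counter += 1
        else:
            self.counter = 0
            self.fired = False  # driver recovered, so re-arm the alarm
        if self.counter >= self.consec_frames and not self.fired:
            self.fired = True
            return True
        return False
```

For example, a stream of frames with a low EAR triggers exactly one siren at the 20th frame, and the alarm only re-arms after the driver opens his eyes again.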
Fig 4.1: A Representation of a Driver Drowsiness System using Raspberry Pi
Fig 4.2: System Architecture for Drowsiness Judgment
The block diagram of the proposed system is shown in the figures above. The camera
captures an image of the person inside the car and passes it to the HOG model, which
detects each facial feature using the facial-landmark technique. The system then analyzes
the position and condition of each feature to determine whether the person is sleeping. If
any of the features, especially the eyes and the head pose of the person, are detected to be
abnormal, the system automatically starts to produce the siren sound. This is where we
obtain the final judgment of the driver’s state: concentrated or distracted.
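The final judgment step can be sketched as a simple rule over the per-frame features: any abnormal signal yields the "distracted" verdict that triggers the siren. The mouth aspect ratio (MAR) measure and every threshold here are hypothetical placeholders, not values tuned in this project.

```python
def judge_state(ear, mar, head_pitch_deg):
    """Combine eye, mouth, and head-pose cues into a final judgment.

    ear            -- eye aspect ratio (low means eyes closing)
    mar            -- mouth aspect ratio (high means mouth wide open, i.e. yawning)
    head_pitch_deg -- head pitch in degrees (large magnitude means nodding)
    """
    abnormal = [
        ear < 0.25,                # eyes nearly closed (assumed threshold)
        mar > 0.6,                 # yawning (assumed threshold)
        abs(head_pitch_deg) > 20,  # head tilted or nodding (assumed threshold)
    ]
    return "distracted" if any(abnormal) else "concentrated"
```

An any-signal rule keeps the response time short, since the siren does not wait for several cues to agree, at the cost of more false positives than a combined score would give.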
4.3 DESCRIPTION OF SOFTWARE FOR IMPLEMENTATION AND TESTING PLAN
OF THE PROPOSED MODEL/SYSTEM
REFERENCES
[1] Dr. K.S. Tiwari, Supriya Bhagat, Nikita Patil, Priya Nagare, “IoT Based Driver
Drowsiness Detection & Health Monitoring System.”
[2] Rateb Jabbar, Mohammed Shinoy, Mohamed Kharbeche, Khalifa Al-Khalifa,
Moez Krichen, Kamel Barkaoui, “Driver Drowsiness Detection Model Using CNN
Techniques for Android App.”
[3] Md. Yousuf Hossain, Fabian Parsia George, “IoT Based Real-Time Drowsy Driving
Detection System for the Prevention of Road Accidents.”
[4] Mika Sunagawa, Shin-ichi Shikii, Wataru Nakai, Makoto Mochizuki, Koichi
Kusukame, and Hiroki Kitajima, “Comprehensive Drowsiness Level Detection Model
Combining Multimodal Information.”
[5] Monagi H. Alkinani, Wazir Zada Khan, and Quratulain Arshad, “Detecting Human
Driver Inattentive and Aggressive Driving Behaviour Using Deep Learning: Recent
Advances, Requirements, and Open Challenges.”
[6] Mkhuseli Ngxande, Jules-Raymond Tapamo, and Michael Burke, “Driver Drowsiness
Detection Using Behavioural Measures and Machine Learning Techniques: A Review of
State-of-Art Techniques.”
[7] Joao Ruivo Paulo, Gabriel Pires, and Urbano J. Nunes, “Cross-Subject Zero Calibration
Driver’s Drowsiness Detection: Exploring Spatiotemporal Image Encoding of EEG Signals
for CNN Classification.”
[8] Naveen Senniappan Karuppusamy, Bo-Yeong Kang, “Multimodal System to Detect
Driver Fatigue Using EEG, Gyroscope, and Image Processing.”
[9] Yaocong Hu, Mingqi Lu, Chao Xie, and Xiaobo Lu, “Driver Drowsiness Recognition
via 3D Conditional GAN and Two-Level Attention Bi-LSTM.”
[10] CeerthiBala U.K., Sarath T.V., “Internet of Things Based Intelligent Drowsiness Alert
System,” Fifth International Conference on Communication and Electronics Systems
(ICCES 2020).