Final Report Driver
CHAPTER 1 INTRODUCTION
Systems that utilize GSM modules have the capability to deliver SMS alerts,
which include location information, in the event of accidents or when
drowsiness is detected [7] [8]. GSM (Global System for Mobile
Communications) technology provides a reliable and ubiquitous means of
sending text messages over cellular networks. In the context of driver
drowsiness detection systems, GSM modules can be integrated to automatically
send SMS alerts to designated contacts, such as emergency services or family
members, when an accident is detected or when the driver is exhibiting signs of
drowsiness. These alerts can include critical information, such as the vehicle's
location, the time of the event, and the driver's identity. The ability to send SMS
alerts via GSM modules ensures that assistance can be dispatched quickly and
efficiently, potentially reducing the severity of injuries or fatalities. The
integration of GSM technology into driver drowsiness detection systems
represents a valuable enhancement, providing an additional layer of safety and
security for drivers and passengers.
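To make this alert path concrete, the following minimal Python sketch (using the pyserial library) shows how an SMS containing a map link could be sent through a SIM800/SIM900-class GSM module with standard AT commands. The serial port name, phone number, and coordinates are placeholder assumptions for illustration, not values taken from this project.

# Minimal sketch: sending a location SMS through a serial GSM module
# using standard AT commands (SIM800/SIM900-class module assumed).
import time
import serial  # pyserial

def send_location_sms(port, phone_number, latitude, longitude):
    gsm = serial.Serial(port, baudrate=9600, timeout=2)

    def at(cmd, wait=1.0):
        gsm.write((cmd + "\r\n").encode())        # send an AT command
        time.sleep(wait)
        return gsm.read(gsm.in_waiting or 1).decode(errors="ignore")

    at("AT")                                      # check the module responds
    at("AT+CMGF=1")                               # switch to SMS text mode
    at('AT+CMGS="{}"'.format(phone_number))       # start an SMS to the contact
    body = "Drowsiness alert! Location: https://maps.google.com/?q={},{}".format(
        latitude, longitude)
    gsm.write(body.encode() + b"\x1a")            # Ctrl+Z terminates the message
    time.sleep(3)
    gsm.close()

if __name__ == "__main__":
    # Placeholder port, number, and coordinates.
    send_location_sms("/dev/ttyUSB0", "+911234567890", 17.3850, 78.4867)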
Eye blink sensors are employed to monitor the driver's eye movements and
detect changes in blink patterns, which serve as indicators of the driver's level of
alertness [2]. These sensors utilize various technologies, such as infrared (IR) or
image processing, to track the driver's eye movements and measure parameters
such as blink rate, blink duration, and eyelid closure. Changes in these
parameters can indicate that the driver is becoming drowsy or fatigued. For
example, a decrease in blink rate or an increase in blink duration may suggest
that the driver is struggling to stay awake. By continuously monitoring the
driver's eye movements and analyzing blink patterns, these sensors provide
valuable information about the driver's state of alertness. This information can
be used to trigger alerts or interventions to prevent accidents caused by driver
drowsiness. The use of eye blink sensors in driver drowsiness detection systems
offers a non-intrusive and reliable means of assessing the driver's level of
alertness and promoting safer driving behavior.
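As an illustration of how such blink parameters can be derived in software, the short Python sketch below computes the blink rate and mean blink duration from a stream of per-frame eye-state samples; the input format and the drowsiness thresholds at the end are assumptions for illustration only.

# Illustrative sketch: deriving blink rate and mean blink duration from
# (timestamp, eye_closed) samples produced by any eye-state sensor or
# classifier. The thresholds below are illustrative assumptions.
def blink_metrics(samples):
    """samples: list of (time_in_seconds, eye_closed_bool) in time order."""
    blinks = []                       # recorded (start, end) closure intervals
    closed_since = None
    for t, closed in samples:
        if closed and closed_since is None:
            closed_since = t          # eye just closed
        elif not closed and closed_since is not None:
            blinks.append((closed_since, t))
            closed_since = None       # eye reopened: one blink recorded
    if not samples:
        return 0.0, 0.0
    span_minutes = max(samples[-1][0] - samples[0][0], 1e-6) / 60.0
    rate = len(blinks) / span_minutes                      # blinks per minute
    mean_duration = sum(e - s for s, e in blinks) / len(blinks) if blinks else 0.0
    return rate, mean_duration

# Long closures or a very low blink rate can then flag drowsiness.
rate, mean_duration = blink_metrics([(0.0, False), (0.2, True), (0.6, False),
                                     (5.0, True), (5.9, False), (10.0, False)])
drowsy = mean_duration > 0.4 or rate < 10    # illustrative thresholds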
Analysis of the eye aspect ratio (EAR) stands out as an effective method for
real-time drowsiness detection, contributing to improved accuracy of detection
systems [10]. EAR is a mathematical measure that quantifies the distance
between the eyelids relative to the width of the eye. When a person becomes
drowsy, their eyelids tend to droop, resulting in a decrease in EAR. By
continuously monitoring EAR and comparing it to a predefined threshold, it is
possible to detect signs of drowsiness in real-time. The simplicity and
computational efficiency of EAR analysis make it well-suited for
implementation in driver drowsiness detection systems. The use of EAR
analysis enhances the accuracy of these systems by providing a reliable and
objective measure of eyelid closure, reducing the likelihood of false positives or
false negatives. The real-time nature of EAR analysis allows for timely
warnings to be issued to the driver, enabling them to take corrective action
before an accident occurs.
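The EAR itself is straightforward to compute. The Python sketch below follows the common six-point eye landmark convention (the first and fourth points as the horizontal eye corners, the remaining points on the upper and lower eyelids); the threshold value shown is a typical assumption that would be tuned per camera and driver.

# Sketch of the eye aspect ratio (EAR) computation for six eye landmarks
# ordered as in the common 68-point facial landmark convention.
import math

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(eye):
    """eye: list of six (x, y) landmark points for one eye."""
    a = euclidean(eye[1], eye[5])        # first vertical eyelid distance
    b = euclidean(eye[2], eye[4])        # second vertical eyelid distance
    c = euclidean(eye[0], eye[3])        # horizontal eye width
    return (a + b) / (2.0 * c)           # EAR drops as the eyelids close

# A fixed threshold (values around 0.2-0.25 are commonly reported)
# separates open eyes from closing eyes; the exact value is tuned.
EAR_THRESHOLD = 0.25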
Infrared-based eye blink sensors offer the capability to accurately analyze
the blink patterns of a driver's eyes by detecting changes in the distance
between the eyelids [2]. These sensors utilize infrared light to
illuminate the driver's eyes and measure the reflected light. By analyzing the
reflected light, it is possible to determine the position and movement of the
eyelids. When a person blinks, the distance between the eyelids decreases,
resulting in a change in the reflected infrared light. By continuously monitoring
the reflected light and analyzing blink patterns, it is possible to detect signs of
drowsiness or fatigue. For example, a decrease in blink rate or an increase in
blink duration may suggest that the driver is struggling to stay awake. Infrared-
based eye blink sensors offer a reliable and non-intrusive means of assessing the
driver's state of alertness, providing valuable information for triggering alerts or
interventions to prevent accidents caused by driver drowsiness.
Face detection techniques, which are based on facial landmarks, are utilized for
identifying and providing warnings to fatigued drivers [7]. These techniques
involve the use of computer vision algorithms to detect and locate the driver's
face in an image or video stream. Once the face is detected, facial landmarks,
such as the corners of the eyes, the tip of the nose, and the corners of the mouth,
are identified. By analyzing the position and movement of these landmarks, it is
possible to detect signs of drowsiness or fatigue. For example, drooping eyelids,
a slack jaw, or a head tilt may suggest that the driver is struggling to stay awake.
Face detection techniques offer a non-intrusive and reliable means of assessing
the driver's state of alertness, providing valuable information for triggering
alerts or interventions to prevent accidents caused by driver drowsiness. The use
of facial landmarks enhances the accuracy of these techniques by providing a
detailed representation of the driver's facial features, reducing the likelihood of
false positives or false negatives.
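A typical landmark-based implementation can be sketched in a few lines of Python using dlib's frontal face detector and its 68-point shape predictor, as shown below; the predictor file name is an assumption, and the model file must be obtained separately.

# Sketch of facial landmark detection with dlib's frontal face detector
# and 68-point shape predictor (model file path assumed).
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_landmarks(frame):
    """Return a list of 68 (x, y) landmark points for each detected face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 0)                      # 0 = no upsampling
    all_points = []
    for face in faces:
        shape = predictor(gray, face)
        points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        all_points.append(points)
    return all_points

# In the 68-point convention, indices 36-41 cover one eye and 42-47 the
# other, so these slices can be fed directly into an EAR computation.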
Computer vision technologies play a critical role in recognizing the
driver's face, extracting images of the eyes, and detecting tiredness through real-
time processing of visual data [7]. These technologies involve the use of
sophisticated algorithms and image processing techniques to analyze the driver's
facial features and identify signs of drowsiness or fatigue. By continuously
monitoring the driver's face and eyes, these systems can detect subtle changes in
appearance that may indicate a decline in alertness. For example, the system
may detect drooping eyelids, a decrease in blink rate, or a change in pupil size.
The real-time processing of visual data allows for timely warnings to be issued
to the driver, enabling them to take corrective action before an accident occurs.
Computer vision technologies offer a non-intrusive and reliable means of
assessing the driver's state of alertness, providing valuable information for
triggering alerts or interventions to prevent accidents caused by driver
drowsiness.
Deep learning and computer vision algorithms enable the precise analysis
of facial landmarks and eye-blinking patterns, significantly enhancing the
accuracy of drowsiness detection systems [10] [11]. Deep learning models, such
as convolutional neural networks (CNNs), are trained on large datasets of facial
images to learn the complex patterns and features that are indicative of
drowsiness or fatigue. These models can then be used to analyze new images of
the driver's face and identify signs of drowsiness with high accuracy. By
combining deep learning with computer vision algorithms, it is possible to
create robust and reliable drowsiness detection systems that can operate in a
variety of driving conditions. The precise analysis of facial landmarks and eye-
blinking patterns allows for the early detection of drowsiness, enabling timely
warnings to be issued to the driver before their condition deteriorates. The use
of these advanced techniques represents a significant advancement in driver
drowsiness detection, offering the potential to significantly reduce the incidence
of fatigue-related accidents.
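As a purely illustrative example of such a model, the Keras sketch below defines a small CNN that classifies eye-region crops as open or closed; the framework, input size, and layer configuration are assumptions for illustration and do not reproduce the architectures of the cited works.

# Illustrative sketch: a small CNN that classifies 64x64 grayscale
# eye-region crops as open (0) or closed (1). Architecture and input
# size are assumptions, not taken from the cited works.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_eye_state_cnn(input_shape=(64, 64, 1)):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),   # probability the eye is closed
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_eye_state_cnn()
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)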
Critical alerts can be sent to emergency contacts via the GSM module,
ensuring prompt assistance in the event of an accident or other emergency
situation [14]. Designated emergency contacts, such as family members or
friends, receive these alerts, which can include critical information such as the vehicle's
location, the time of the event, and the nature of the emergency. The GSM
module can be programmed to automatically send these alerts to the designated
emergency contacts via SMS or other communication channels. The ability to
send critical alerts to emergency contacts ensures that assistance can be
dispatched quickly and efficiently, potentially reducing the severity of injuries
or fatalities. The use of emergency contacts provides an additional layer of
safety and security for drivers and passengers, knowing that help will be on the
way in the event of an emergency.
The paper addresses the pervasive issue of vehicular accidents stemming from
suboptimal driving conditions, with a particular emphasis on driver impairment
due to intoxication, fatigue, and drowsiness. The study highlights the alarming
statistic that approximately 20% of accidents are attributed directly to driver
fatigue, underscoring the critical need for technological interventions to mitigate
this risk. The proposed research leverages technological advancements to
proactively prevent accidents, focusing on the implementation of an infrared
(IR) sensor system designed to monitor and regulate the driver's eye-blinking
behavior. This system utilizes an IR transmitter to emit infrared waves towards
the eyes, and an IR receiver to capture the reflected waves. A closed eye,
indicative of drowsiness or inattention, triggers an alarm mechanism. Beyond
drowsiness detection, the system incorporates a pulse sensor to monitor the
driver's physiological state and facilitate the transmission of SMS alerts in the
event of detected health issues. The core functionality of the system revolves
around the continuous monitoring of the driver's eye blinks, aiming to provide
timely warnings and prevent accidents caused by drowsy driving. The
implemented design incorporates a single eye blink sensor installed within the
vehicle. Furthermore, the system integrates a Global Positioning System (GPS)
to provide location-based alerts, enhancing the overall safety and
responsiveness of the system in emergency situations.
This project addresses the critical issue of driver fatigue and its detrimental
impact on road safety. It is designed to mitigate the risks associated with
diminished reaction time, impaired judgment, and reduced awareness levels in
drivers experiencing drowsiness, a significant contributor to vehicular accidents.
To combat this problem, the project centers on the development of a web-based
application capable of real-time assessment of driver alertness and drowsiness.
This system leverages advanced machine learning algorithms in conjunction
with sensor data to identify key indicators of fatigue. The core functionality
involves the implementation of computer vision and facial recognition
techniques to monitor crucial markers, including head posture, blink rates, and
eye movements. Data acquired from in-vehicle cameras and sensors is analyzed
to detect signs of drowsiness, such as head nodding and altered blinking
patterns. Upon detecting such indicators, the system provides timely alerts to
the driver via auditory or visual cues, prompting them to take a break or engage
in restorative actions to regain focus. The application's backend utilizes Python
and the dlib library to construct a learning model. This model is designed to
analyze and train on data, thereby enhancing its accuracy in identifying
drowsiness indicators over time. The system's architecture is also designed for
seamless integration with contemporary in-vehicle technologies, ensuring real-
time operation with minimal computational burden. Ultimately, the project aims
to improve road safety by prioritizing driver well-being and promoting
awareness of the dangers associated with drowsy driving.
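The core of such a detection loop can be sketched as follows in Python, reusing the eye_aspect_ratio and facial_landmarks helpers sketched earlier; the camera index, EAR threshold, and consecutive-frame count are assumptions that would be tuned during implementation.

# Sketch of the real-time alert loop: the EAR is computed per frame, and
# an alert fires only after the eyes stay below the threshold for several
# consecutive frames (to ignore normal blinks).
import cv2

EAR_THRESHOLD = 0.25        # below this, the eye is treated as closing
CONSEC_FRAMES = 20          # roughly 0.7 s at 30 fps before alerting

def run_drowsiness_monitor():
    cap = cv2.VideoCapture(0)          # in-vehicle camera (index assumed)
    closed_frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for points in facial_landmarks(frame):
            left = points[36:42]
            right = points[42:48]
            ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
            closed_frames = closed_frames + 1 if ear < EAR_THRESHOLD else 0
            if closed_frames >= CONSEC_FRAMES:
                print("DROWSINESS ALERT: take a break")   # or sound a buzzer
        cv2.imshow("driver monitor", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()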
Ojha, Dhiren, Pawar, Amit, Kasliwal, Gaurav, Raut, R., and Devkar, Anita.
2023. "Driver Drowsiness Detection Using Deep Learning".
https://doi.org/10.1109/INCET57972.2023.10169941
Borhade, G. L., Sharmale, Pooja Sandip, Tamchikar, Kanchan Manoj, and
Pansare, Purva Vasant. 2024. "Drowsy Driver Sleeping Device and
Driver Alert System". Shivkrupa Publications.
https://doi.org/10.48175/ijarsct-15482
Madni, Hamza Ahmad, Raza, Ali, Sehar, Rukhshanda, Thalji, Nisrean, and
Abualigah, Laith. 2024. "Novel Transfer Learning Approach for Driver
Drowsiness Detection Using Eye Movement Behavior". IEEE Access.
https://doi.org/10.1109/access.2024.3392640
This innovative system, designed to mitigate road accidents and foster safer
driving practices, utilizes the Internet of Vehicles (IoV) to continuously monitor
driver behavior and deliver timely warnings when hazardous actions are
identified. At its center is a Raspberry Pi 3 Model B, which serves as the central
processing unit, orchestrating data acquisition and analysis from an array of
sensors. These sensors include an ultrasonic sensor measuring proximity to
other vehicles, an ADC MCP3008 facilitating analog-to-digital conversion of
sensor data, a GPS module providing precise location information, and an
alcohol sensor identifying potential driver intoxication. Augmenting this sensor
suite is a USB camera, enabling visual monitoring of the driver, a Bluetooth
module for short-range communication, and a GSM module facilitating long-
distance communication for alerts. The system's functionality extends to real-
time feedback mechanisms. Upon detecting unsafe driving patterns, the system
immediately alerts the driver through both audible buzzers and visual cues
displayed on an LCD screen. In critical situations, the GSM module transmits
notifications to pre-designated emergency contacts, ensuring prompt assistance
when needed. The entire system operates using a 12V 1A power adapter. A key
feature is the integration of a web camera to monitor eye movements. The
captured video data is processed by the Raspberry Pi to identify signs of driver
drowsiness, triggering preventative measures. This distinguishes the system
from existing methodologies. By leveraging real-time data processing and IoV
connectivity, this system aims to proactively reduce the incidence of road
accidents and promote a safer driving environment.
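A simplified version of this sensor polling loop is sketched below in Python with the gpiozero library, which the Raspberry Pi supports; the GPIO pin numbers, the MCP3008 channel, and the two thresholds are assumed values for illustration rather than the cited system's actual configuration.

# Sketch of the Raspberry Pi sensor polling loop: ultrasonic proximity
# and the alcohol sensor read through the MCP3008 ADC (wiring assumed).
from time import sleep
from gpiozero import DistanceSensor, MCP3008, Buzzer

ultrasonic = DistanceSensor(echo=24, trigger=23)   # distance to the vehicle ahead
alcohol = MCP3008(channel=0)                       # analog alcohol sensor on CH0
buzzer = Buzzer(17)

DISTANCE_LIMIT_M = 0.5     # closer than this is treated as unsafe following
ALCOHOL_LIMIT = 0.4        # normalized 0..1 ADC reading; tuned per sensor

while True:
    too_close = ultrasonic.distance < DISTANCE_LIMIT_M
    intoxicated = alcohol.value > ALCOHOL_LIMIT
    if too_close or intoxicated:
        buzzer.on()        # immediate audible warning to the driver
        # In the full system the LCD message and, in critical cases, the
        # GSM notification to emergency contacts would be triggered here.
    else:
        buzzer.off()
    sleep(0.2)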
Anyone can buy these boards through online auction sites or a simple web
search. Since the Arduino hardware designs are open source, anyone can create
their own clones of the Arduino and sell them, so the market for the boards is
competitive. An official Arduino costs about $30, and a clone often less than $20.
Figure 4.3 AC adapter
Figure 4.4 The Arduino (ATmega8 microcontroller, 16 MHz) can run off USB or an external power source
The most commonly used character-based LCDs are built around a dedicated
display controller (typically an HD44780-compatible controller). In this section,
we will discuss character-based LCDs, their interfacing with various
microcontrollers, the available interfaces (8-bit/4-bit), programming, and the
special tricks you can do with these simple-looking LCDs to give a new look to
your application.
PIN DESCRIPTION
The most commonly used LCDs found in the market today are 1-line, 2-line, or
4-line LCDs that have only one controller and support at most 80 characters,
whereas LCDs supporting more than 80 characters make use of two controllers.
Most LCDs with one controller have 14 pins, and LCDs with two controllers have
16 pins (the two extra pins in both cases are for the back-light LED connections).
The pin description is shown in the table below.
Display data RAM (DDRAM) stores display data represented as 8-bit character
codes. Its extended capacity is 80 × 8 bits, or 80 characters. The area of
DDRAM that is not used for the display can be used as general data RAM, so
whatever you send to DDRAM is what is actually displayed on the LCD. For an
LCD such as a 1x16, only 16 characters are visible, so anything written after the
16th character is stored in DDRAM but is not visible to the user.
Now you might be wondering how an ASCII value sent to DDRAM actually
becomes a visible character; the answer is the CGROM. The character generator
ROM generates 5 x 8 dot or 5 x 10 dot character patterns from 8-bit character
codes (see Figure 5 and Figure 6 for more details). It can generate 208 5 x 8 dot
character patterns and 32 5 x 10 dot character patterns. User-defined character
patterns are also available via mask-programmed ROM.
As is clear from the name, the CGRAM area is used to create custom characters
on the LCD. In the character generator RAM, the user can rewrite character
patterns by program. For 5 x 8 dots, eight character patterns can be written, and
for 5 x 10 dots, four character patterns can be written. Later in this section we
will explain how to use the CGRAM area to make custom characters and
animations that give nice effects to your application.
BF - BUSY FLAG
The busy flag is a status indicator for the LCD. When we send a command or
data to the LCD for processing, this flag is set (BF = 1), and as soon as the
instruction has executed successfully the flag is cleared (BF = 0). This is helpful
for producing an exact amount of delay for LCD processing. To read the busy
flag, the condition RS = 0 and R/W = 1 must be met, and the MSB of the LCD
data bus (D7) acts as the busy flag. BF = 1 means the LCD is busy and will not
accept the next command or data; BF = 0 means the LCD is ready for the next
command or data to process.
The controller has two 8-bit registers: the instruction register and the data
register. The instruction register is where you send commands to the LCD (e.g.,
shift, clear, set address), and the data register is used for storing the data that is
to be displayed on the LCD.
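To illustrate how the two registers are used in practice, the Python sketch below drives a character LCD in 4-bit mode from a Raspberry Pi with RPi.GPIO; the pin wiring is an assumption, and instead of polling the busy flag it uses fixed delays (a common simplification when R/W is tied to ground).

# Minimal sketch: writing to the instruction register (RS = 0) and the
# data register (RS = 1) of a character LCD in 4-bit mode. Pin numbers
# are an assumed wiring; fixed delays replace busy-flag polling.
import time
import RPi.GPIO as GPIO

RS, EN = 7, 8                     # RS selects instruction (0) or data (1)
DATA = [25, 24, 23, 18]           # LCD pins D4..D7

def lcd_write(byte, rs):
    """rs=0 writes a command, rs=1 writes displayable data (to DDRAM)."""
    GPIO.output(RS, rs)
    for nibble in (byte >> 4, byte & 0x0F):       # high nibble, then low
        for i, pin in enumerate(DATA):
            GPIO.output(pin, bool(nibble & (1 << i)))
        GPIO.output(EN, True)                     # pulse Enable to latch
        time.sleep(0.0005)
        GPIO.output(EN, False)
        time.sleep(0.0005)

def lcd_init():
    GPIO.setmode(GPIO.BCM)
    for pin in [RS, EN] + DATA:
        GPIO.setup(pin, GPIO.OUT)
    for cmd in (0x33, 0x32, 0x28, 0x0C, 0x06, 0x01):
        lcd_write(cmd, rs=0)                      # standard 4-bit init sequence
    time.sleep(0.002)                             # allow the clear to finish

def lcd_print(text):
    for ch in text:
        lcd_write(ord(ch), rs=1)                  # character codes go to DDRAM

# lcd_init(); lcd_print("DROWSINESS ALERT")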
GSM Module Interfaces
Interface to external SIM 3V/ 1.8V
Analog audio interface
RTC backup
SPI interface
Serial interface
Antenna pad
I2C
GPIO
PWM
ADC
Alcohol Sensor Features:
5V operation
Simple to use
LEDs for output and power
Output sensitivity adjustable
Analog output 0V to 5V
Digital output 0V or 5V
Low Cost
Fast Response
Stable and Long Life
Good Sensitivity to Alcohol Gas
Both Digital and Analog Outputs
On-board LED Indicator
Technical Data
DO – Digital Output
AO – Analog Output
Eye Blink Sensor Features:
An eye blink sensor relies on infrared technology to detect if a person's
eye is closed. The sensor is made up of two components: an infrared transmitter
and an infrared receiver. The transmitter emits infrared waves onto the eye,
while the receiver searches for changes in the reflected waves, indicating that
the eye has blinked. The sensor's output is sent to a microcontroller board, such
as Arduino, Raspberry Pi, AVR, PIC, or other microcontrollers.
To create an eye blink sensor with an Arduino, one requires an eye blink
sensor, an Arduino board, and a switch between the power supply and the
Arduino. The sensor has three pins: GND (ground), 5V, and OP (output). The
sensor's output is transmitted to the Arduino board, which can be programmed
to execute various tasks based on the output.
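For consistency with the other sketches in this report, the example below reads the same three-pin sensor from a Raspberry Pi in Python rather than from an Arduino; the GPIO pin, the assumption that the OP pin goes high while the eye is closed, and the two-second closure limit are all illustrative.

# Illustrative sketch: reading the eye blink sensor's digital OP pin and
# sounding an alarm after a sustained closure (pin, polarity, and
# threshold are assumptions).
from time import monotonic, sleep
from gpiozero import DigitalInputDevice, Buzzer

blink_sensor = DigitalInputDevice(27)   # OP pin; assumed high while eye is closed
buzzer = Buzzer(17)
CLOSED_LIMIT_S = 2.0                    # sustained closure treated as sleep

closed_since = None
while True:
    if blink_sensor.value:              # eye currently closed
        if closed_since is None:
            closed_since = monotonic()
        elif monotonic() - closed_since >= CLOSED_LIMIT_S:
            buzzer.on()                 # wake-up alarm
    else:                               # eye open again
        closed_since = None
        buzzer.off()
    sleep(0.05)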
Vibration Sensor Features:
Vibration sensor modules, like the SW-420, typically feature a normally
closed vibration sensor, LM393 comparator for digital output, adjustable
sensitivity, and LED indicators for power and output status, making them
suitable for various applications like alarms and vibration detection.
The LM393 comparator provides a clean digital output (0 or 1) when
vibration is detected; the comparator output has a clean signal, a good
waveform, and strong drive capability (greater than 15 mA).
An onboard potentiometer allows users to adjust the vibration threshold,
controlling when the sensor triggers.
Typically operates on a 3.3V to 5V DC supply.
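A short event-driven sketch of how the module's digital output can be used for impact detection is given below; the GPIO pin number and the idea of reusing the send_location_sms() helper from the earlier GSM sketch are assumptions for illustration.

# Sketch: crash/impact detection from the vibration module's digital
# output using an event callback (pin number assumed).
from signal import pause
from gpiozero import DigitalInputDevice

vibration = DigitalInputDevice(22)      # DO pin of the vibration module

def on_impact():
    print("Impact detected - notifying emergency contacts")
    # e.g. send_location_sms("/dev/ttyUSB0", "+911234567890", lat, lon)

vibration.when_activated = on_impact    # fires when the output goes active
pause()                                 # keep the script alive for events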
DC Motor Features: The main features of this motor are its magnetic
properties, light weight, and small size. It operates from a 5 V DC supply at a
speed of 60 rpm.
HC-05 Bluetooth Module Features:
HC-05 is a Bluetooth module designed for wireless communication. The
module can be used in a master or slave configuration. It has a range of up
to about 100 m, which depends on the transmitter, the receiver, the
atmosphere, and geographic and urban conditions.
It implements the IEEE 802.15.1 standardized protocol, through which one
can build a wireless personal area network (PAN). It uses frequency-hopping
spread spectrum (FHSS) radio technology to send data over the air.
It uses serial communication to communicate with devices, and it
communicates with a microcontroller using a serial port (USART).
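Because the HC-05 simply bridges its radio link to a serial port, exchanging data with it from Python takes only a few lines with pyserial, as sketched below; the port name and the message format are assumptions, and 9600 baud is the module's usual default data rate.

# Sketch: pushing a status message to a paired device over the HC-05's
# serial (USART) link (port name and message format assumed).
import serial

bt = serial.Serial("/dev/rfcomm0", baudrate=9600, timeout=1)
bt.write(b"STATUS:DROWSY\n")            # alert a paired phone or companion app
reply = bt.readline().decode(errors="ignore").strip()
if reply == "ACK":
    print("Companion app acknowledged the alert")
bt.close()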
The first Arduino was introduced in 2005. The project leaders sought to
provide an inexpensive and easy way for hobbyists, students, and professionals
to create devices that interact with their environment using sensors and
actuators. Common examples for beginner hobbyists include simple robots,
thermostats and motion detectors. Arduino started in 2005 as a project for
students at the Interaction Design Institute Ivrea in Ivrea, Italy. The name
"Arduino" comes from a bar in Ivrea, where some of the founders of the project
used to meet. The bar itself was named after Arduin of Ivrea, a margrave. A
hardware thesis for a wiring design was contributed by Colombian student
Hernando Barragán. After the Wiring platform was complete, researchers worked to make
it lighter, less expensive, and available to the open source community.
At a conceptual level, when using the Arduino software stack, all boards are
programmed over an RS232 serial connection, but the way this is implemented
varies by hardware version. Serial Arduino boards contain a level shifter circuit
to convert between RS232 level and TTL level signals. Current Arduino boards
are programmed via USB, implemented using USB to serial adapter chips such
as the FTDI FT232.
Some variants, such as the Arduino Mini and the unofficial Boarduino, use a
detachable adapter board or cable, Bluetooth or other methods. (When used
with traditional microcontroller tools instead of the Arduino IDE, standard AVR
ISP programming is used.) The Arduino board exposes most of the
microcontroller's I/O pins for use by other circuits. The Diecimila,
Duemilanove, and current Uno provide 14 digital I/O pins, six of which can
produce pulse-width modulated signals, and six analog inputs, which can also be
used as six digital I/O pins. These pins are on the top of the board, via female
0.10-inch (2.54 mm) headers. Several plug-in application shields are also
commercially available. The Arduino Nano, and Arduino compatible Bare
Bones Board and Boarduino boards may provide male header pins on the
underside of the board that can plug into solder-less breadboards. There are
many Arduino compatible and Arduino derived boards. Some are functionally
equivalent to an Arduino and can be used interchangeably. Many enhance the
basic Arduino by adding output drivers, often for use in school level education
to simplify the construction of buggies and small robots. Others are electrically
equivalent but change the form factor, sometimes retaining compatibility with
shields and sometimes not. Some variants use completely different processors, with
varying levels of compatibility.
CHAPTER 8 REFERENCES
[1] Nasri, Ismail, Karrouchi, Mohammed, Kassmi, K., and Messaoudi, A. 2022. "A Review of Driver Drowsiness Detection Systems: Techniques, Advantages and Limitations".
[2] Babu, K. Avinash and Rao, G. Eswara. 2024. "Driver Drowsiness Detection System for Accident Prevention". https://doi.org/10.47392/irjaeh.2024.0372
[4] Widyastuti, Nur Rachmi and Brilianti, D. F. 2024. "The Impact of Drowsiness on Road Traffic Accidents in Yogyakarta". https://doi.org/10.58526/jsret.v3i4.555
[5] Ojha, Dhiren, Pawar, Amit, Kasliwal, Gaurav, Raut, R., and Devkar, Anita. 2023. "Driver Drowsiness Detection Using Deep Learning". https://doi.org/10.1109/INCET57972.2023.10169941
[6] Khare, Shanu and Talwandi, Navjot Singh. 2024. "Enhancing Road Safety: The Role of Intelligent Driver Drowsiness Detection Systems". https://doi.org/10.1109/ICBDS61829.2024.10837023
[9] Borhade, G. L., Sharmale, Pooja Sandip, Tamchikar, Kanchan Manoj, and Pansare, Purva Vasant. 2024. "Drowsy Driver Sleeping Device and Driver Alert System". Shivkrupa Publications. https://doi.org/10.48175/ijarsct-15482
[11] Srilakshmi, T., Reddy, H., Potluri, Yaswanthi, Burra, Lakshmi Ramani, Thota, Mohana Vamsi, and Gundimeda, Rishita. 2023. "Automated Driver Drowsiness Detection System using Computer Vision and Machine Learning". https://doi.org/10.1109/ICSCDS56580.2023.10104942
[13] Shiny, G. Susan, Rajalakshmi, G., Kamalambika, K., Kavitha, P., and Brentha, S. 2023. "Driver Safety System using Arduino". Journal of Ubiquitous Computing and Communication Technologies. https://doi.org/10.36548/jucct.2023.3.003
[14] Madhuri, Tanuku SaiLakshmi and BalaVishnu, J. 2024. "Monitoring and Alert System for Hazardous Driving Behavior Using Internet of Vehicles Technology". https://doi.org/10.1109/ISENSE63713.2024.10872025
[15] Raja, D., Barkavi, G., Aishwarya, S., Keerthana, K., and Vasudevan, V. 2022. "Alcohol Detection and Emergency Alert System using IoT". International Conference on Intelligent Computing and Control Systems. https://doi.org/10.1109/ICICCS53718.2022.9788419
[18] Singh, Shivam, Bansal, Shreyash, and Parvez, Anjum. 2022. "A Deep-Learning Method to Predicting Traffic Accidents Due to Drowse". International Conference on Control and Robots. https://doi.org/10.1109/ICCR56254.2022.9995795
[19] Thomas, Athira M. 2024. "IoT-Based Drowsiness Monitoring and Crash Detection System to Enhance Road Safety". International Journal of Scientific Research in Engineering and Management. https://doi.org/10.55041/ijsrem31368
[21] Rode, Manoj, Bethi, Anirudh, Baddam, Santosh, Kondaveti, Bala Laxmi Prasanna, Gadipally, Tejaswini, and Katroth, Naresh. 2022. "Driver Drowsiness Detection Using Machine Learning". International Journal for Research in Applied Science and Engineering Technology (IJRASET). https://doi.org/10.22214/ijraset.2022.47516
[22] Ahmad, Khubab, Ping, Em Poh, and Aziz, Nor Azlina Ab. 2023. "Machine Learning Approaches for Detecting Driver Drowsiness: A Critical Review". https://doi.org/10.15379/ijmst.v10i1.1815
[23] Pahariya, Shivam, Vats, Pratyush, and Suchitra, S. 2024. "Driver Drowsiness Detection using MobileNetV2 with Transfer Learning Approach". https://doi.org/10.1109/ADICS58448.2024.10533606
[24] Suriya, M., Preethie, K., Amitha, M., Sumithra, M., Fathima, I. Sumaiya, and Vishnu, M. 2023. "An Efficient Driver Drowsiness Detection Using Deep Learning". https://doi.org/10.1109/ICACCS57279.2023.10112803
[27] Madni, Hamza Ahmad, Raza, Ali, Sehar, Rukhshanda, Thalji, Nisrean, and Abualigah, Laith. 2024. "Novel Transfer Learning Approach for Driver Drowsiness Detection Using Eye Movement Behavior". IEEE Access. https://doi.org/10.1109/access.2024.3392640
[28] Alkishri, Wasin, Abualkishik, Abdallah M., and Al-Bahri, M. 2022. "Enhanced Image Processing and Fuzzy Logic Approach for Optimizing Driver Drowsiness Detection". Applied Computational Intelligence and Soft Computing. https://doi.org/10.1155/2022/9551203
[30] Albadawi, Yaman, Takruri, Maen, and Awad, Mohammed. 2022. "A Review of Recent Developments in Driver Drowsiness Detection Systems". Sensors. https://doi.org/10.3390/s22052069