Using Facial Analysis To Combat Distracted Driving in Autonomous Vehicles
Abstract—Contrary to popular belief, current autonomous vehicles require an alert driver to be present to prevent injury or loss of life. This study proposes using facial analysis to determine whether a driver is distracted and to alert them if they are. A facial analysis system was developed to determine the orientation of the driver's head, measure eye closure duration, and find signs of drowsiness. This was combined with an autonomous car model to simulate a complete notification system. A series of tests using the facial orientation, eye closure, and drowsiness metrics was conducted to determine the system's functionality and limitations under basic user distractions, such as texting, choosing music, or looking at mirrors. A highway hypnosis detection method was also developed separately, but has yet to be validated. The prototype was able to detect a variety of distracted driving scenarios and implement countermeasures specific to autonomous cars.

I. INTRODUCTION

Nearly 1.25 million people die in road accidents every year, making them the leading cause of death among people between the ages of 15 and 29. Driver inattention specifically is estimated to account for 25% of driving accidents [1]-[3]. The introduction of autonomous vehicles (AVs) was meant to make roads safer because they are able to follow road regulations better than humans. The problem, however, arises when AVs switch to manual control in situations that are less predictable, like interacting with pedestrians or cyclists [4].

Although AVs still require the attention of their users, drivers persist in believing that the vehicle is nearly perfect at driving. This leads to behavior that may endanger the driver, such as falling asleep or looking away from the road to check a text message. Even drivers who understand the dangers of not paying attention to the road may find it difficult to keep an eye on their surroundings, because they may suffer from passive fatigue, which stems from a lack of vehicle control input by the user. Such fatigue may reduce driver alertness and lead to drowsiness or even sleep [5].

Currently, most vehicles equipped with distraction detection systems check for only one type of distraction. Consequently, there is a need for a detection model that takes several factors into account at once. This, coupled with an effective alert system, can be used to prevent drivers from putting themselves at risk in AVs. Factors such as facial orientation, eye closure, and drowsiness are vital to classifying a driver as distracted.

II. BACKGROUND

A. Autonomous Vehicles

Autonomous vehicles are classified into six main levels, ranging from no autonomy to complete autonomy [6]. Level 0 vehicles have no automation. Level 1 introduces assistance features into the car, such as automatic steering or acceleration. While Level 2 vehicles, such as General Motors' Super Cruise and Tesla's Autopilot, can manage speed, braking, and steering, they still require a human driver to constantly supervise the vehicle. Level 3 AVs, unlike Level 2, are capable of taking full control with little supervision, but drivers must still remain vigilant in case of emergencies. In fact, most accidents involving Level 3 vehicles occur as a result of slow driver reaction time. For instance, Google was forced to pull its Level 3 AV from the market after finding that drivers were too trusting and not quick enough to respond. Cars with Level 4 and 5 autonomy can drive completely without human intervention. While this eliminates the issue of driver response time, such cars have yet to reach the market [6].

B. Detecting Distracted Driving

There are several methods of assessing driver alertness. A recent study found that the most effective method for detecting symptoms of monotony was a combination of galvanic skin response and electrooculography (EOG - blink rate) [7]. Invasive studies have also utilized the electroencephalogram (EEG), which measures brain waves, as a method of detecting highway hypnosis. It has been shown that speed of eyelid opening, delay of reopening, percent of eye closure, and blink intervals are all indicators of drowsiness [7]. Other detection methods include analyzing seat movement, steering movement, lane swerving, car speed fluctuations, and heart rate [8].

1) Glance Time: Drivers routinely glance away from the road at visual clutter. Studies have shown that this can lead to longer off-road glances [8]. According to Rockwell's six-year study involving 106 subjects participating in 200 test sessions,
off-road glances should not exceed two seconds in duration. This limit is commonly known as Rockwell's 2-second rule, and it is used in the Alliance of Automobile Manufacturers (AAM) and National Highway Traffic Safety Administration (NHTSA) guidelines as a cap on how long a driver may look away from the road [9]. A driver's field of view (FOV) is considered to be bounded 45° to the left and right and 22.5° downward [10]. Although glances at mirrors are not included in this FOV, the mean duration of a mirror glance is about 1.1 s [11], which would not be considered dangerous under Rockwell's 2-second rule.
2) PERCLOS: In 1994, the PERCLOS diagnostic was developed to detect drowsiness. PERCLOS is an abbreviation for "percentage of eye closure" and measures the proportion of time the eyes are more than 80% closed over a 3-minute period. The metric was established as part of a driving simulation study by the National Highway Traffic Safety Administration [12]. The study found that PERCLOS was more reliable than blink rate and blink duration for drowsiness detection.

3) Highway Hypnosis: Highway hypnosis is the cause of approximately 100,000 police-reported crashes [13]. Highway hypnosis is a form of automaticity, a behavior which allows people to perform actions without consciously thinking about them. In these cases, the driver often cannot recall what happened after they regain alertness. Highway hypnosis is commonly compared to drunk driving because in both cases the driver's reaction times plummet [14]. This is extremely dangerous in autonomous vehicles, where control of the car may be handed over to the driver at any time.
C. Resources and Materials

1) Facial Analysis with Dlib: Facial analysis begins with detecting a face within the camera frame. Once a face has been detected, facial landmarks (Fig. 1) can be drawn on the face. These facial landmarks can be generated using Dlib, an open-source software library used in various facial analysis applications. It uses machine learning to create a map representing different regions of the given face. The map consists of 68 points that can be used to perform mathematical analysis on changes in these regions [15]. Measuring physiological signals, such as brain waves or heart rate, to detect distracted driving is less practical because electrodes would have to be attached directly to the driver [16]. Image analysis, meanwhile, is less intrusive and is therefore a more feasible approach for real-world implementation.

Fig. 1. A map of the facial landmarks used by Dlib and PERCLOS

2) Arduino Uno and HC-05 Bluetooth: The Arduino Uno (Fig. 3) is a microcontroller board commonly used in electronics projects. Its 14 digital and 6 analog input and output pins allow it to interface with a multitude of devices at once. A USB-B port on the board allows users to upload their own Arduino code through the Arduino Integrated Development Environment (IDE), which uses a dialect of C++ developed by Arduino specifically for its boards. The USB cable also allows the Uno to draw external power from a laptop; alternatively, it can be powered by a 7-12 V supply. The Uno can communicate with other devices over serial communication through an HC-05 Bluetooth module (Fig. 2), a component which uses standard Bluetooth 2.0 protocols.

Fig. 2. HC-05 Bluetooth Module

Fig. 3. Arduino UNO Microcontroller

3) HC-SR04 Ultrasonic Sensor: To operate completely autonomously, the Arduino vehicle requires an HC-SR04 ultrasonic sensor (Fig. 4), which is oriented toward the vehicle's right side, allowing it to find safe locations to park. An ultrasonic sensor operates by emitting a high-frequency sound wave and measuring the time needed for the wave to return after reflecting off an object. Using this time interval and the speed of sound through air, the distance to the object is calculated. The HC-SR04 specifically has a range of 2 cm to 400 cm, accurate to 3 mm.

Fig. 4. HC-SR04 Ultrasonic Sensor
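As an illustration of this arithmetic, the minimal Python sketch below (not part of the original system, which performs this step on the Arduino) converts a round-trip echo time into a distance; the speed-of-sound constant assumes roughly 343 m/s in air.

```python
# Minimal sketch of the HC-SR04 distance calculation described above:
# distance = (echo time x speed of sound) / 2.
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s in air, expressed in cm per microsecond

def echo_time_to_distance_cm(echo_time_us: float) -> float:
    """Convert a round-trip echo time (microseconds) to distance (cm)."""
    # Divide by 2 because the wave travels to the object and back.
    return (echo_time_us * SPEED_OF_SOUND_CM_PER_US) / 2.0

if __name__ == "__main__":
    # A ~1166 us round trip corresponds to roughly 20 cm.
    print(round(echo_time_to_distance_cm(1166.0), 1))
```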
III. PROCEDURE
A. Detection Software
The detection system encompasses three different scenarios
in which a driver is not alert: looking away from the road,
having their eyes closed, and being drowsy. The first test,
facial orientation, checks if the driver is facing the road.
Additionally, the system checks if the driver’s eyes are open.
Sometimes, even if the driver is facing the road with their eyes
open, they may still be in a state of drowsiness characterized by droopy eyes; the PERCLOS metric is used to identify the driver's drowsiness. After an extended period of driving, the driver might also be in a state of highway hypnosis, in which they lose awareness even though their eyes are open. A live feed of the driver's face is supplied to the system by a Logitech C920 Pro HD webcam; the high resolution allows for better detection of facial landmarks and a more accurate result. As the autonomous vehicle in this research does not have a driver seated inside it, the camera is part of a separate system that processes the video and communicates with the vehicle through Bluetooth.
1) Facial Orientation Detection: A facial landmark map of points was created by Dlib for each frame of the video. The points to analyze for facial orientation were found by using the 3D projection of the face onto the screen. From this projection, it was observed that when the head was turned to the left or the right, the distances between points 14 and 55 and between points 4 and 49, respectively (Fig. 1), increased or decreased. The distance between two points was calculated using the Euclidean distance formula (Fig. 5):

$$d_{fl} = \sqrt{(p_{2x} - p_{1x})^2 + (p_{2y} - p_{1y})^2}$$

Fig. 5. Euclidean Distance Formula

These distances were correspondingly labeled the left and right distances for each side of the face. It was observed that the system incorrectly classified the driver's orientation depending on their distance from the camera, because the 3D projection of the driver took up more or less of the frame as the driver moved closer to or farther from the webcam. To solve this, the ratio between the left and right distances was used instead of the raw values. This prevented false classifications because both values increase in proportion as the driver moves toward or away from the camera.

2) Facial Orientation Calibration: A calibration program was written in order to define the driver's field of view. Given the distance between the driver and the webcam, it is possible to tilt the driver's head at an exact angle. The left-right ratio values for those angles were recorded and then used as thresholds for determining the facial orientation of the driver.

Fig. 6. Visual Area in Car rear-view mirrors
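A minimal sketch of this ratio test follows, assuming Dlib's standard 68-point shape predictor. The landmark numbers in the paper are 1-indexed as in Fig. 1, so they are shifted by one for Dlib's 0-indexed Python API, and the threshold values are placeholders standing in for those recorded by the calibration program.

```python
# Hedged sketch of the left/right distance-ratio orientation check.
import math
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def dist(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

def orientation_ratio(gray):
    faces = detector(gray)
    if not faces:
        return None  # no face found; treated as a distraction elsewhere
    shape = predictor(gray, faces[0])
    # Paper's point pairs 14-55 and 4-49 (1-indexed) -> parts 13/54 and 3/48.
    d14_55 = dist(shape.part(13), shape.part(54))
    d4_49 = dist(shape.part(3), shape.part(48))
    # The ratio normalizes out the driver's distance from the camera,
    # since both distances grow or shrink in proportion.
    return d14_55 / d4_49

# Placeholder thresholds; calibration records the real values at the
# angles bounding the driver's field of view.
LEFT_LIMIT, RIGHT_LIMIT = 0.5, 2.0

def facing_road(ratio):
    return ratio is not None and LEFT_LIMIT <= ratio <= RIGHT_LIMIT
```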
eye is completely open. For each frame, the current area of
the eye is used to calculate the percent of eye closure. A rolling
average of the proportion of frames where eyes with greater
than 80% closure is recorded over 3 minutes. This PERCLOS
value is compared with the set metric of 0.15, determined by
Wierwille et al.[18].
In order to account for changes in eye area caused by
varying distance from the camera, an area adjustment function
was added. It is assumed that the true base length of the
eye (distance between points 1 and 4) does not change
based on the closure of the lid. Hence, the change in base
length of the 3D projection can be used as a coefficient of
expansion. In calibration, the distances between points (1-2,
Fig. 6. Visual Area in Car rear-view mirrors 2-3, 3-4, 4-5, 5-6, 6-1) along with base length (1-4) are
calculated for when ones eyes are completely opened. These
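The EAR computation of formula (1) and the two-stage closure timer just described can be sketched as follows; the threshold constant is a placeholder for the per-driver value produced by the calibration program.

```python
# Sketch of formula (1) plus the 2 s / +3 s closure timer described above.
import math
import time

def ear(eye):
    """eye: the six (x, y) landmark points P1..P6 around one eye."""
    p1, p2, p3, p4, p5, p6 = eye
    d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))

EAR_CLOSED = 0.2      # placeholder; calibrated per driver in practice
closed_since = None   # timestamp when the eyes first closed

def closure_state(current_ear):
    """Return 1 (normal), 2 (first alert), or 3 (escalated alert)."""
    global closed_since
    if current_ear > EAR_CLOSED:
        closed_since = None
        return 1
    if closed_since is None:
        closed_since = time.monotonic()
    elapsed = time.monotonic() - closed_since
    if elapsed >= 5.0:   # 2 s initial alert plus 3 s of continued closure
        return 3
    if elapsed >= 2.0:
        return 2
    return 1
```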
4) PERCLOS: PERCLOS detection analyzes the same set of Dlib facial landmark points as eye closure detection. The first step is to determine what percentage of the eye is closed in each frame. A base eye area is calculated for when the eye is completely open; for each frame, the current area of the eye is used to calculate the percent of eye closure. A rolling average of the proportion of frames in which the eyes are more than 80% closed is recorded over 3 minutes. This PERCLOS value is compared with the set threshold of 0.15, determined by Wierwille et al. [18].

In order to account for changes in eye area caused by varying distance from the camera, an area adjustment function was added. It is assumed that the true base length of the eye (the distance between points 1 and 4) does not change based on the closure of the lid. Hence, the change in base length of the 3D projection can be used as a coefficient of expansion. In calibration, the distances between points (1-2, 2-3, 3-4, 4-5, 5-6, 6-1) along with the base length (1-4) are calculated for when one's eyes are completely open. These distances are then adjusted, based on the previously calculated coefficient of expansion, to the change in base length for each frame. This is important because these distances represent the maximum area one's eye could be open at a particular distance. The area with adjusted contour lengths is then compared with the eye area for the corresponding frame by equation (2); Fig. 8 demonstrates the adjustment algorithm.

$$1 - \frac{\mathrm{MeasuredEyeArea} - \mathrm{EyesClosedArea}}{\mathrm{EyesOpenedArea} - \mathrm{EyesClosedArea}} > 0.8 \tag{2}$$
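A sketch of the per-frame bookkeeping this implies is given below. The frame rate is an assumption, and the rescaling applies the square of the linear coefficient of expansion directly to the calibrated areas as a shorthand for adjusting each contour length individually.

```python
# Sketch of the PERCLOS pipeline described above: equation (2) per frame,
# then a rolling 3-minute proportion compared against the 0.15 limit.
from collections import deque

FPS = 30                          # assumed camera frame rate
window = deque(maxlen=180 * FPS)  # rolling 3-minute window of frames
PERCLOS_LIMIT = 0.15              # threshold from Wierwille et al. [18]

def adjusted(area, base_calib, base_now):
    """Rescale a calibrated eye area by the coefficient of expansion,
    i.e., the change in projected base length (points 1-4). Lengths
    scale linearly, so areas scale with the square."""
    k = base_now / base_calib
    return area * k * k

def frame_closed(measured, opened_calib, closed_calib, base_calib, base_now):
    """Equation (2): True when the eye is more than 80% closed."""
    opened = adjusted(opened_calib, base_calib, base_now)
    closed = adjusted(closed_calib, base_calib, base_now)
    return 1.0 - (measured - closed) / (opened - closed) > 0.8

def update_perclos(*frame_args):
    window.append(frame_closed(*frame_args))
    perclos = sum(window) / len(window)
    return perclos > PERCLOS_LIMIT    # True -> driver flagged as drowsy
```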
Fig. 10. Notification Flowchart

The third state of this system autonomously detects a shoulder of the road and safely pulls the car over; this occurs when the driver is determined to be drowsy or distracted for a prolonged period of time. The mechanism to detect a shoulder and pull over was programmed using an ultrasonic sensor that measures the distance to the walls of the track as the vehicle drives around. When the ultrasonic sensor detects the indentation in the track, the car autonomously pulls over.

Fig. 12. L293D Motor Shield

In the center of the vehicle is the Arduino Uno microcontroller, attached with four screws; placed on the Arduino's pins is the L293D motor shield, into which all sensors and motors are plugged. This motor shield is what enables the Arduino to take in information from the sensors as well as drive the motors through pulse-width modulation (PWM). The HC-05 Bluetooth module is also plugged into the shield, as is the battery holder, which houses two rechargeable 3.7 V AA batteries used to power the vehicle. The notification system was also attached to the chassis and connected to the Arduino through a breadboard.

2) Testing Environment: To test the autonomous vehicle, a specific environment was created to eliminate as many variables as possible while still providing a useful model. A track (Fig. 13(a)), similar to a raceway track, was created using white foam board. In the center of the track, an inside wall was built, which features a rectangular inlet designated as a safe parking spot for the vehicle (Fig. 13(b)).
Fig. 13. The test track: (a) top view; (b) side view

A black line was painted onto the track with a width of 1¼ inches, slightly wider than the UCTRONICS line sensor module itself. This ensures that all three light sensors on the module report 0 when the vehicle is completely on the line. Black was chosen for the line because of its high contrast against the white foam board, drastically reducing the possibility of false positives or negatives.
3) UCTRONICS Line Sensor Module: Each light sensor on the module reports a value of 0 if the reflected light it senses is low and a 1 otherwise.

Fig. 14. UCTRONICS Line Sensor Module

Since the light sensors in the line sensor module are placed side by side in a row perpendicular to the vehicle's direction of motion, their readings can be used to tell which direction the line is curving underneath the vehicle, as sketched below.
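The exact steering rules are not spelled out in the text above; the following Python rendering is only an illustrative guess at the decision logic (the vehicle itself runs equivalent Arduino C++), where 0 means a sensor sees the black line.

```python
# Illustrative decision rule for the three-sensor line follower.
# 0 = sensor over the black line, 1 = sensor over white foam board.
def steering(left: int, center: int, right: int) -> str:
    if (left, center, right) == (0, 0, 0):
        return "forward"     # line is wider than the module: centered
    if left == 0 and right == 1:
        return "turn left"   # line has drifted toward the left sensor
    if right == 0 and left == 1:
        return "turn right"  # line has drifted toward the right sensor
    return "forward"         # centered on the line, or momentarily lost
```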
4) Bluetooth Communication: Once the driver has been determined to be distracted or drowsy, the image recognition system must communicate this information to the Arduino vehicle over Bluetooth. The HC-05 Bluetooth module plugged into the Arduino microcontroller uses serial communication to receive the state of the driver during operation.

Using PySerial, a Python library for serial communication between devices, the Python program sends a value of 1, 2, or 3 to the HC-05 module. A value of 1 represents normal operation, a value of 2 represents mild distraction, and a value of 3 represents severe distraction. After decoding the value, the Arduino vehicle takes action: it alerts the driver if the value is 2, or pulls the vehicle over if the value is 3.
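The sending side can be as small as the following PySerial sketch; the serial port name is an assumption for illustration, and HC-05 modules commonly default to 9600 baud.

```python
# Sending the driver state (1, 2, or 3) to the HC-05 over a serial link.
import serial

link = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)  # assumed port

def send_state(state: int) -> None:
    """1 = normal, 2 = mild distraction, 3 = severe distraction."""
    if state not in (1, 2, 3):
        raise ValueError("state must be 1, 2, or 3")
    link.write(str(state).encode("ascii"))  # read on the Arduino via Serial
```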
IV. RESULTS

A. Autonomous Vehicle Operation

The autonomous vehicle can follow the line adequately, although it displays abrupt motions along turns and over slight bumps. This is due to the motors' poor performance at low speeds, which requires the vehicle to turn by running the left and right motors at the same speed in opposite directions; the vehicle therefore realigns itself before progressing forward, as opposed to following a smooth curve.

The Bluetooth communication is very reliable, detecting a change in value promptly, which allows the model to alert the driver as quickly as possible. When it receives a change in state from the image recognition software, the vehicle reacts as quickly as possible, given the minimal delay inherent in wireless communication.

The detection of a safe parking zone and automatic pull-over is successful, allowing the vehicle to parallel park when needed without driver input. Pulling over is achieved through a series of motor commands that are executed once the ultrasonic sensor detects a safe zone, in this case a wall shifted approximately 6 inches inward from the car track. In the future, a feedback control system such as a proportional-integral-derivative (PID) controller could be used for smoother driving.

B. Technical Validation

The three primary detection systems were tested for functionality. For each detection system, simulations were run by the same subject to determine whether the expected outcome was produced.

Fig. 15. Test Subject Being Recognized by Detection Software

Fig. 16. Test Subject's Eyes Being Detected as Shut

1) Facial Orientation: The driver faced the webcam and turned their head left, right, up, and down, both inside and outside the 45° left/right and 20° up/down field of view. Each movement was replicated in 5 trials, each lasting at least 10 seconds. A successful trial means state 2 was reached after 2 seconds outside the FOV and state 3 was reached after 3 additional seconds. Based on Table I, facial orientation detection functions as expected.

TABLE I
FACIAL ORIENTATION DETECTION DATA

Facial Orientation                   No Face Present   Left   Right   Up    Down   Straight
Within FOV (state 1 expected)        N/A               5/5    5/5     5/5   5/5    5/5
Outside FOV (states 2+3 expected)    5/5               5/5    5/5     5/5   5/5    N/A

2) Eye Closure Detection: Eyes were closed for intervals of 1, 3, 4, 6, and 7 seconds, with five trials per interval. From Table II, eye closure detection functions as expected: each state was properly diagnosed 5 out of 5 times.

TABLE II
EYE CLOSURE DETECTION DATA

Eyes Closed For   State   Success Rate
1 sec             1       5/5
3 sec             2       5/5
4 sec             2       5/5
6 sec             3       5/5
7 sec             3       5/5

3) Drowsiness Detection: The driver kept their eyes closed, open, and squinted for over three minutes per trial, with 5 trials per eye position. When the eyes were squinted or closed, the PERCLOS metric was greater than 0.15, indicating drowsiness; it was less than 0.15 when the eyes were open.

4) Multiple People in FOV: The system was tested with multiple people in frame. In each trial, the correct person (the driver) was identified, as they were closer to the camera.
C. Addressing Limitations

Data was gathered by placing a test subject in an isolated environment devoid of visual and audio distractions. The subject was then given 10 minutes to become accustomed to the driving simulation software used, which simulated driving in a busy city with real-world driving and road conditions.

1) Texting: The subject was asked to respond to a simple text while running the simulator, sending it from their own phone to a pre-entered number saved on the phone. Trials were split into two groups according to where the subject was told to hold their phone: looking at the phone in their lap, or holding it in front of them. Tests found that the system detected the cases where the subject held the phone below the field of view, but had limited success when the subject kept the phone in front of their face.

2) Choosing Music: The subject was asked to play any music of their choice while driving the simulator. Data was collected before, during, and after the subject selected music on the radio. The system only detected distraction when the subject's facial orientation moved sufficiently outside the field of view while changing the music. However, since the subject is still not focused on the road while choosing music, this is a flaw in the system.

3) Mirror: The subject was asked to look at a point on the wall representing the same angle required to look at the side-view and rear-view mirrors. For the mirrors within the field of view, distraction was not detected. Since mirrors sometimes need to be glanced at for longer than 2 seconds, a more sophisticated system will need to be developed to account for this.

4) Food and Drinks: A bottle of water was placed next to the subject, at the average distance and height of a cup holder. The subject was asked to unscrew the lid and take a sip of water while running the simulator. Distraction was not detected through facial orientation, but some cases triggered detection based on eye closure.

V. CONCLUSION

This project proposed a solution to distracted driving, specifically tailored to autonomous vehicles. The success of the system supports the claim that image recognition is effective in preventing accidents for autonomous vehicles. Multiple image recognition and machine learning based techniques were used to detect driver distraction and drowsiness. A facial orientation algorithm was developed to determine whether the driver is looking toward or away from the road. An eye closure detection algorithm was developed to determine the duration of eyelid closures based on the EAR. Lastly, percent eye closure (PERCLOS) was integrated to evaluate user drowsiness. This software was combined and was then able to communicate via Bluetooth with an autonomous vehicle constructed using an Arduino microcontroller. During normal operation, the vehicle followed a line on a track; when commanded through Bluetooth, the vehicle either blinked an LED or automatically pulled over when it detected a safe parking spot, depending on the severity of the distraction. Highway hypnosis detection was developed but not integrated with the previous three detection systems and the notification process.

The three primary detection methods all proved successful in every tested distraction scenario. During simulation testing, the facial orientation software recognized head turns inside and outside the FOV 5 out of 5 times for each scenario. Furthermore, the eye closure detection software picked up each state of distraction, based on how long the test subject's eyes were closed, 5 out of 5 times across various timings. The drowsiness detection results followed a similar trend, with the driver's state recognized 5 out of 5 times across trial scenarios. The software was shown to be effective in detecting a variety of real-world distractions.

A. Future Developments and Applications

The solution developed can be expanded to a variety of contexts. In future research, improvements that utilize more aspects of the system would produce a safer, more versatile process.

What makes this system unique is its application to autonomous vehicles, which allows the car to take more advanced reaction measures. Future developments include smarter route changes. For example, highway hypnosis can be caused by a monotonous driving scene. If highway hypnosis is detected, autonomous vehicles may reroute to a less monotonous route with a more variable setting, such as inner-city driving. Additionally, the GPS and road maps already integrated into AVs can be used to locate pit stops or more permanent rest stops that are better suited to helping the driver recover. A technology like this would benefit the hauling industry, where high costs are incurred if truck drivers are drowsy after long hours of driving. Another possible implementation is a feature that allows the system to communicate with other smart cars on the road, letting nearby cars know of the driver's distracted state and take appropriate measures. A similar system could be created to communicate with non-autonomous cars; however, this would still require driver intervention on the part of the non-autonomous vehicle.

Many environmental factors and variations in drivers' visual characteristics still need to be considered for the image-recognition software. Currently, the effect of skin color, eye color, or facial structure on the image recognition is unknown. Furthermore, a change in background, brightness, or other aspects of the surroundings may alter the model's success; this can be remedied through further testing of the model. Another current limitation of the prototype concerns side glances exceeding 2 seconds that occur when the driver stops to check for other cars before turning, glances at mirrors, or backs up. Using data from AVs, it is possible to know when the car is performing such actions, and thus to determine when attentive glances are expected to occur. More specifically, when the car is turning right or left, or backing up, the acceptable FOV in the facial orientation program can be expanded.

Though an LED was used as the notification system in this research, a fully implemented solution would take advantage
of more of the car's functions to alert the driver, including increasing the volume of the music, turning down the internal temperature, or tugging on the driver's seatbelt.
APPENDIX A

Fig. 17. Arduino C++ code which describes the actions taken by the vehicle in the three different states. The code in 17(a) describes actions taken in states 1 and 2; 17(b) describes actions taken in state 3.
ACKNOWLEDGMENTS

The authors of this paper gratefully acknowledge the following: Head Residential Teaching Assistant Michael Higgins for his efforts in aiding our project; project coordinators Michael DiBuono and Stephen Michaelowski for their invaluable assistance in organizing this endeavor; project mentors Xavier Johnson, Kaitlin Taylor, Marissa DelRocini, Christopher Parmentier, Allison Boyd, and Alexis Sutton for their valuable knowledge of engineering and hands-on involvement; Dean Jean Patrick Antoine, the Director of GSET, for his management and guidance; Research Coordinator Helen Sagges for her assistance in conducting proper research; Rutgers University, Rutgers School of Engineering, the State of New Jersey, and the NJ Space Grant Consortium for the chance to advance knowledge, explore engineering, and open up new opportunities; Lockheed Martin for funding our scientific endeavours; and lastly, NJ GSET Alumni for their continued participation and support.

REFERENCES

[1] T. Horberry, J. Anderson, M. A. Regan, T. J. Triggs, and J. Brown, "Driver distraction: The effects of concurrent in-vehicle tasks, road environment complexity and age on driving performance," Accident Analysis & Prevention, vol. 38, no. 1, pp. 185-191, 2006.
[2] J. Edquist, "The Effects of Visual Clutter on Driving Performance," Monash University, Melbourne, Australia, 2008.
[3] AAA Foundation for Traffic Safety. [Online]. Available: https://aaafoundation.org/. [Accessed: 21-Jul-2019].
[4] Brandt, R. Stemmer, and A. Rakotonirainy, "Affordable visual driver monitoring system for fatigue and monotony," 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No.04CH37583), The Hague, 2004, pp. 6451-6456, vol. 7.
[5] M. Cummings, C. Mastracchio, K. Thornburg, and A. Mkrtchyan, "Boredom and Distraction in Multiple Unmanned Vehicle Supervisory Control," Interacting with Computers, vol. 25, no. 1, pp. 34-47, 2013.
[6] "Autopilot," Tesla, Inc. [Online]. Available: https://www.tesla.com/autopilot. [Accessed: 21-Jul-2019].
[7] N. Galley and R. Schleicher, "Fatigue Indicators from the Electrooculogram - a Research Report," 2002.
[8] X. Liang, S. Liang, M. Yan, and X. Liang, "Highway traffic density control based on RBF neural network," 2017 Chinese Automation Congress (CAC), 2017.
[9] G. Pastor Cerezuela, P. Tejero, M. Chóliz, and M. Monteagudo, "Wertheim's hypothesis on 'highway hypnosis': empirical evidence from a study on motorway and conventional road driving," Accident Analysis & Prevention, 01-Nov-2004. [Online]. Available: https://www.deepdyve.com/lp/elsevier/wertheim-s-hypothesis-on-highway-hypnosis-empirical-evidence-from-a-GW3IiutlN1. [Accessed: 25-Jul-2019].
[10] T. H. Rockwell and R. S. Miller, "Risk Behavior in Driving," SAE Technical Paper Series, 1980.
[11] K. Kircher, C. Ahlstrom, and A. Kircher, "Comparison of Two Eye-Gaze Based Real-Time Driver Distraction Detection Algorithms in a Small-Scale Field Operational Test," Proceedings of the 5th International Driving Symposium on Human Factors in Driver Assessment, Training, and Vehicle Design, 2009.
[12] "key-publications-road-safety-call-for-action-oct-2009-10pp," Human Rights Documents Online, 2009.
[13] L. Tijerina, M. Gleckler, D. Stoltzfus, S. Johnston, M. J. Goodman, and W. W. Wierwille, "PERCLOS: A Valid Psychophysiological Measure of Alertness As Assessed by Psychomotor Vigilance," PsycEXTRA Dataset, 1998.
[14] "Drowsy Driving - Stay Alert, Arrive Alive." [Online]. Available: http://drowsydriving.org/. [Accessed: 25-Jul-2019].
[15] J. S. Kerr, "Driving without attention mode (DWAM): a formalisation of inattentive states in driving," in A. G. Gale, I. D. Brown, C. M. Haslegrave, I. Moorhead, and S. Taylor (Eds.), Vision in Vehicles-III. Elsevier, North-Holland, 1991, pp. 473-479.
[16] "Research on vehicle-based driver status/performance monitoring: Development, validation, and refinement of algorithms for detection of driver drowsiness," PsycEXTRA Dataset, Dec. 1994.
[17] B. Bhavya and R. A. Josephine, "Intel-Eye: An Innovative System for Accident Detection, Warning and Prevention Using Image Processing (A Two Way Approach in Eye Gaze Analysis)," International Journal of Computer and Communication Engineering, pp. 189-193, 2013.