HMI and Automation Design Recommendations
(November 2022)
Authors
Shashank Mehrotra, Meng Wang, Nicholas Wong, Jah’inaya Parker, Shannon C. Roberts
University of Massachusetts–Amherst
Mehrotra: https://orcid.org/0000-0002-6749-3773
Wang: https://orcid.org/0000-0002-3304-0610
Wong: https://orcid.org/0000-0002-1707-9769
Parker: https://orcid.org/0000-0001-7583-5174
Roberts: https://orcid.org/0000-0002-0052-7801
Kim: https://orcid.org/0000-0001-9806-1204
Romo: https://orcid.org/0000-0002-4553-1556
Horrey: https://orcid.org/0000-0002-9533-4411
As vehicle technology and automation continue to advance, the need to keep drivers
engaged and informed of system status and system actions is becoming increasingly
important. This is especially true in cases where the driver will assume different
responsibilities when using automation or when the driver no longer needs to be actively
monitoring the driving environment for extended periods of time. The human–machine
interface (HMI) is of critical importance in such use cases.
This technical report summarizes a review of the literature concerning the use of different
HMIs in the study of driver performance following requests to intervene/resume vehicle
control. It also considers existing guidelines regarding the design or implementation of
HMIs in automated vehicles. In considering both avenues, a new set of guidelines is
proposed. The report should be of interest to researchers, safety advocates, the automobile
industry, and government entities.
This report is a product of an active cooperative research program between the AAA
Foundation for Traffic Safety and the SAFER-SIM University Transportation Center.
Executive Director, AAA Foundation for Traffic Safety
Dawn Marshall, SAFER-SIM Director, National Advanced Driving Simulator, The University of Iowa
About the Sponsors
Founded in 1947, the AAA Foundation for Traffic Safety in Washington, D.C., is a
nonprofit, publicly supported charitable research and education organization dedicated to
saving lives by preventing traffic crashes and reducing injuries when crashes occur.
Funding for this report was provided by voluntary contributions from AAA/CAA and their
affiliated motor clubs, individual members, AAA-affiliated insurance companies, and other
organizations or sources.
This publication is distributed by the AAA Foundation for Traffic Safety at no charge, as a
public service. It may not be resold or used for commercial purposes without the explicit
permission of the foundation. It may, however, be copied in whole or in part and distributed
for free via any medium, provided the Foundation is given appropriate credit as the source
of the material. The AAA Foundation for Traffic Safety assumes no liability for the use or
misuse of any information, opinions, findings, conclusions, or recommendations contained
in this report.
If trade or manufacturer’s names are mentioned, it is only because they are considered
essential to the object of this report and their mention should not be construed as an
endorsement. The AAA Foundation for Traffic Safety does not endorse products or
manufacturers.
The universities comprising SAFER-SIM study how road users, roadway infrastructure,
and new vehicle technologies interact and interface with each other using microsimulation
and state-of-the-art driving, bicycling, and pedestrian simulators.
DISCLAIMER
The contents of this report reflect the views of the authors, who are responsible for the facts and the
accuracy of the information presented herein. This document is disseminated in the interest of
information exchange. The report is funded, partially or entirely, by a grant from the U.S.
Department of Transportation’s University Transportation Centers Program. However, the U.S.
Government assumes no liability for the contents or use thereof.
Table of Contents
Abstract
Introduction
Method
    Eligibility Criteria
    Information Sources
    Search
    Study Selection
    Data Collection Process for Information Extraction
        Top-Down Guidelines
        Bottom-Up Guidelines
Results
    Outcomes from Literature Search
        General Patterns
        Study Outcomes
    Correspondence with Top-Down Interface Guidelines
    Bottom-up Recommendations
Discussion
    HMI Design Recommendations
        Modality
        Information Content and Control
        Timing and Stages
    Limitations and Future Research Needs
References
Abstract
Introduction
Technologies that allow parts of the driving task to be automated have become more
widely available over the past several years. Even entry-level vehicles now offer driver
support systems, known as Level 1 automation by the Society of Automotive Engineers
(SAE), such as adaptive cruise control (ACC) and lane keeping assistance (LKA) (SAE,
2021). When combined, these systems are classified as Level 2, where both lateral and
longitudinal control of the vehicle are automated, but still require the driver to be engaged
in the driving task and be ready to take over quickly. In the future, Level 3 systems, where
the driver is at times no longer responsible for monitoring the road, will be available on
production vehicles, further altering the relationship between the driver and their vehicle.
Vehicle automation, at any level, stands to change the role and responsibilities of
drivers using the systems. While these technologies offer safety and convenience to
motorists, they could pose a risk of being misused. For example, some drivers were found to
be 50% more likely to engage in secondary tasks when using Level 2 automation compared
to when they drove without the technology engaged (Dunn, Dingus, & Soccolich, 2019). Also,
drivers using other forms of driver assistance systems (e.g., lane departure warnings or
forward collision warnings) sometimes find them annoying or disturbing to the point where
driving performance is negatively impacted (Biondi et al., 2014) or drivers completely
ignore them (Dijksterhuis et al., 2012). One way to mitigate this risk is using sensor-based
alert systems that monitor the driver and road environment. When these systems detect
driver inattention or road conditions that the automation cannot handle, they issue alerts
to the driver to return their attention to the road or to take over control of the vehicle.
These requests to intervene (RTIs) or requests to monitor the driving environment more
closely are an important part of the driver–vehicle human–machine interface (HMI), as
they need to orient the driver to the driving task without being too startling, distracting, or
irritating—lest the driver decide to ignore them or turn off the technology altogether. As
different levels of driving automation systems have different expectations for the driver, the
RTI and corresponding HMI can have a variety of goals. For example, drivers who are in-
the-loop (in physical control of the vehicle and monitoring the driving situation) may need a
subtle RTI, drivers who are on-the-loop (not physically controlling the vehicle but
monitoring the driving situation) may need a more overt RTI, and drivers who are out-of-
the-loop (not monitoring the driving situation) may need a highly explicit RTI (Merat et al.,
2019).
Several efforts have also sought to develop design guidance for such alerts and interfaces
(e.g., Bazilinskyy & DeWinter, 2015; Naujoks et al., 2019; van den Beukel & van der Voort,
2017).
In the context of vehicle automation, much research has been done on the different
modalities and design specifications of these RTI alerts and their accompanying HMIs, with
findings that are not always congruent or easy to interpret. As such, the current study
seeks to: (1) review and synthesize existing research and guidance on HMIs and driver
takeovers in the context of vehicle automation, and (2) propose a clear and comprehensive
set of recommendations that could inform future system development and implementation.
Method
Eligibility Criteria
Information Sources
Search
The databases were explored using an extensive keyword search across a variety of
categories. The search terms were combined using OR operators within each category and
using AND operators across different categories. The search terms associated with each
category are shown in Table 1.
Table 1. Keywords.
Automation: "ADAS", "ADF", "SAE levels"
Driver state: "Driver State Monitoring", "Driver support features", "DSF"
Automation features: "Adaptive cruise control", "Lane-keeping assistance", "Autopilot"
Human machine interactions: "Human–machine interface", "Human–machine interaction", "HMI", "Human–computer interaction", "Human Automation interactions", "Interface design", "Multimodal interface"
Alerts: "Warning", "Alert", "Alert modality", "Visual display", "Speech displays", "Voice I/O", "Vibrotactile display", "Haptic I/O"
Situational awareness: "Situational Awareness", "Vigilance", "Monitoring", "Supervisory control", "Out-of-the-loop", "Visual attention", "Perception", "Mode awareness", "Gaze Coordination"
Driver response: "Reaction Time", "Take-over", "Takeover request", "Time to Collision", "Control transition", "Transition time", "Steering"
Driver impairment: "Non-driving related task", "Dual-task performance", "Distraction", "Fatigue", "Distracting task", "Distracted driving", "Secondary task"
While searching the WoS database, the scope of the topics was further narrowed
using pre-existing database categories: “Computer Science Interdisciplinary Applications,”
“Transportation,” “Ergonomics,” and “Transportation Science Technology.” Similarly,
results from the TRID database were further filtered according to the “Safety and Human
Factors” category. The categories and search terms were selected after reviewing with the
research team, as well as reviewing a small set of relevant articles to check if those terms
were included.
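To make the combination rule concrete, the short Python sketch below assembles a boolean query string from a subset of the Table 1 categories (OR within a category, AND across categories). The category subsets and the build_query helper are illustrative only and do not reproduce the exact query syntax submitted to WoS or TRID.

```python
# Illustrative only: build a boolean query with OR within a category and AND
# across categories, using a subset of the Table 1 keywords. This does not
# reproduce the exact WoS/TRID query syntax.
categories = {
    "Automation": ['"ADAS"', '"ADF"', '"SAE levels"'],
    "Human machine interactions": ['"Human-machine interface"', '"HMI"',
                                   '"Interface design"', '"Multimodal interface"'],
    "Alerts": ['"Warning"', '"Alert"', '"Alert modality"'],
}

def build_query(categories: dict) -> str:
    """Join terms with OR inside each category and AND across categories."""
    grouped = ["(" + " OR ".join(terms) + ")" for terms in categories.values()]
    return " AND ".join(grouped)

print(build_query(categories))
# ("ADAS" OR "ADF" OR "SAE levels") AND ("Human-machine interface" OR ...) AND ("Warning" OR ...)
```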
Study Selection
Figure 1 illustrates the process of gathering and selecting studies. The search terms
resulted in an expansive list of articles. A list of 5,645 unique articles was obtained from
the WoS database and a list of 8,254 unique articles was obtained from the TRID database,
yielding 13,899 articles in the original set. (If any article appeared in both databases, it was
attributed to WoS.)
These articles were screened and filtered by the research team who reviewed the
title, and sometimes the abstract, for general relevance. Prior to this, to ensure there was
good inter-rater agreement and clarity on the selection criteria, each team member
reviewed the same set of 200 articles from WoS and 100 articles from TRID. There was an
average of 90.8% agreement for articles from WoS and 87.8% agreement for articles from
TRID. The title/abstract review resulted in the selection of 883 articles.
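The percent agreement reported above amounts to the share of articles on which two raters made the same include/exclude decision on the shared calibration set. The sketch below illustrates that calculation with made-up decisions; it is not the project's actual coding data.

```python
# Hypothetical illustration of the percent-agreement check on the shared
# calibration set: compare two raters' include/exclude decisions item by item.
# The decisions below are made up for a 10-article subset.
def percent_agreement(rater_a: list, rater_b: list) -> float:
    """Share of articles on which both raters made the same include/exclude call."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

rater_a = [True, True, False, False, True, False, True, True, False, True]
rater_b = [True, False, False, False, True, False, True, True, False, True]
print(f"{percent_agreement(rater_a, rater_b):.1f}% agreement")  # 90.0% agreement
```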
Figure 1. Process of filtering articles to the final list of papers
Following the quick filtering of articles based on title (and abstract), a more careful
round of review was conducted to further filter the articles. Although the original search
was more inclusive (see Search section above), the selection criteria for this round of review
were more stringent to better align with the current objectives. The inclusion criteria were
as follows:
• Involved a road vehicle application, whether for passenger or commercial vehicles
• Implemented Level 1 automation (i.e., ACC or LKA) or higher
• Implemented an HMI or an alert system to notify the driver
• Measured driver performance or behavior (e.g., situation awareness, takeover time,
brake reaction time, or glance behavior)
Only articles that met all criteria were assessed further, yielding a total of 194
articles. Articles that met some but not all the desired criteria (N = 405) were logged for
future use, but are not discussed further in this report. Next, the full text articles (N = 194)
were downloaded and assessed by a second team member. This two-stage process yielded 88
eligible papers. Articles that did not meet all criteria were excluded (N = 106). Finally, each
respective team member examined the reference list of each of the selected articles (N = 88)
to identify additional relevant publications. This resulted in the addition of 16 new articles
to the review, i.e., 16 new articles that were not identified in the original search. During
this same process, 8 articles were excluded due to duplications or other issues. In total, 96
papers met all the criteria and were reviewed in detail.
Team members examined the full articles to extract specific information related to
the research objectives. This key information included the following:
Top-Down Guidelines
Table 2. Top-down guidelines synthesized from existing sources.
Alert for smooth transition: Provide alerts that allow drivers to come back into the control-loop in time without causing startle reactions
Auditory attention: Use auditory signals as a base attention-retrieving signal, especially for urgent situations
Visually informative: Use visual interfaces to enable more content-rich transfer of information and to allow users their own pace of information retrieval
Multimodal: Use multimodal interfaces together in a complementary fashion, especially for those with impairments and for urgent situations
Along the line of sight: Present high-priority information close to the driver's expected line of sight
Continuous feedback: Provide continuous feedback and feed-forward information on system state (e.g., providing information on activation, deactivation, availability, and malfunction without causing counterproductive effects, like distraction)
Appropriate alert intensity: Use signal intensity (e.g., frequency, wavelength, pace, and duration) to indicate perceived urgency, but ensure not to annoy the driver
Clarity in alert message: Use written words to express different levels of urgency, like "Danger" compared to "Notice"
Involve driver in the control-loop: Keep operators involved in the control-loop (perception, decision-making, and implementation)
Alert towards source of danger: Use alerts to orient the user towards the source of danger
No need for continuous alert monitoring: Ensure that time-critical interactions with the system do not require continuous attention
Continuous mode display: Display system mode continuously
Unintentional state change: Minimize the potential for unintentional activation and deactivation
Standardized symbology: Use commonly accepted or standardized symbols to communicate the automation mode; use of non-standard symbols should be supplemented by additional text explanations or vocal phrases
Tactile cueing: Use tactile interfaces for cueing distracted drivers' attention back to the road
Appropriate mode grouping: Group HMI elements together according to their function to support the perception of mode indicators
System failure contingency: In case of sensor failures, display their consequences and required operator steps
Bottom-Up Guidelines
First, conclusions, limitations, and reviewer comments from the original 96 studies
were compiled (see also section below “Study Outcomes”). These data were used to inform a
listing of potential bottom-up guidelines. A single mention of an HMI design suggestion was
considered sufficient at the outset, provided it was justified and explainable based on the
study’s findings. An example can be seen from Lin et al. (2020) who concluded that visual
iconography coupled with audio alerts can elicit appropriate responses from drivers. This
was included as a bottom-up recommendation: prioritization of pictographic images. Next,
the list was compared to the top-down guidelines to ensure that any guidance derived from
the studies was novel, i.e., not already covered by an existing top-down guideline. For example, Borojeni et al. (2016) concluded
that visual alerts in the direction of the obstacle that caused automation to disengage can
help drivers perform more effective takeovers; this conclusion is consistent with the top-
down recommendation that warns the driver of the source of danger and so was not
considered as new guidance. Then, the conclusions, limitations, and reviewer comments
were reviewed for duplicates, i.e., if two studies pointed towards the same conclusion, they
were combined into a single guideline. For example, Louw et al. (2017) concluded that early
avoidance should be the focus of HMI design. However, Merat et al. (2014) also concluded
that HMI messages regarding takeover requests need to be timely and predictable (to the
extent possible). Taken together, these conclusions imply that the focus of HMI design
should be on pre-empting crashes and allowing for early avoidance. Finally, those
conclusions, limitations, and reviewer comments that were neither a top-down
recommendation, nor were a duplicate of findings from similar studies were compiled into
the final list of bottom-up guidelines.
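The winnowing logic described above (compile candidate conclusions, drop those already covered by a top-down guideline, and merge duplicates) can be summarized in a short sketch. The keyword-matching check and the example conclusions below are toy simplifications standing in for the research team's manual judgment, not an actual implementation used in this review.

```python
# Illustrative sketch of the bottom-up compilation logic: keep study conclusions,
# drop those already covered by a top-down guideline, and merge duplicates into
# a single guideline. The keyword check and example conclusions are toy stand-ins
# for the research team's manual judgment.
TOP_DOWN_KEYWORDS = {"source of danger", "line of sight", "multimodal"}

def covered_by_top_down(conclusion: str) -> bool:
    text = conclusion.lower()
    return any(keyword in text for keyword in TOP_DOWN_KEYWORDS)

def compile_bottom_up(conclusions: dict) -> dict:
    """Map each candidate guideline to the studies supporting it."""
    guidelines = {}
    for study, conclusion in conclusions.items():
        if covered_by_top_down(conclusion):
            continue  # consistent with existing top-down guidance, so not new
        guidelines.setdefault(conclusion, []).append(study)  # merge duplicates
    return guidelines

conclusions = {
    "Lin et al. (2020)": "prioritize pictographic images over text",
    "Borojeni et al. (2016)": "orient visual alerts toward the source of danger",
    "Louw et al. (2017)": "focus HMI design on early avoidance",
    "Merat et al. (2014)": "focus HMI design on early avoidance",
}
print(compile_bottom_up(conclusions))
```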
Results
The results are presented in three different parts. First, the overall outcome of the
literature search is described, including general patterns across categories and interface
characteristics (e.g., interface modality and levels of automation). Then, study outcomes
were grouped based on significant findings commonly referenced across studies. Next, HMIs
from the literature search were evaluated against the 17 top-down guidelines shown in
Table 2 (i.e., from best practices in past literature). Finally, a complementary listing of
bottom-up guidelines was identified from the literature search. For example, if several
articles included a feature or HMI design element that showed favorable results but was
not already covered by the top-down guidelines, it was considered in the final list of HMI
design guidelines. Note that, in general, each article describes one HMI; hence, we refer to
HMIs as opposed to articles in the sections below.
Overall, the majority of the 96 HMIs focused on RTI requests. Very few HMIs gave
drivers information concerning other details of the automation, such as its status. More
specifically, the HMIs included in this review were generally subject to one of two different
research objectives—(1) to test the effectiveness of RTIs by examining driver performance
in different situations or contexts, and (2) to evaluate an HMI that included takeover
requests. Some HMIs considered different aspects of takeover requests, driver states (e.g.,
distraction or time constraints), driver demographics, and driving scenarios. In many cases,
non-driving related tasks were introduced to assess the impact of driver state on the
effectiveness of the HMI for takeover requests. The majority of HMIs were evaluated in a
driving simulator or laboratory, while some were evaluated in the field or on a test track.
General Patterns
Modality of the Interfaces. Most HMIs employed or examined in the set of studies
combined modalities (e.g., visual and auditory). More specifically, most HMIs used a
combination of audio-visual, visual-haptic, or audio-visual-haptic interfaces (58%) for RTIs.
However, some HMIs only used the visual modality (24%), auditory modality (12%), or
haptic modality (4%).
few HMIs focused on Level 4 and above (10%). In certain HMIs, the level of automation was
either not specified or was unclear (16%).
Almost all HMIs were assessed using dependent variables like response time.
Additionally, scenario-specific driving performance measures like speeding, lane change
duration, lane deviation, braking behavior, and offset from lane center were used to
evaluate driving behavior. In addition to performance measures, eye-glance behavior, such
as anticipatory glances, gaze dispersion, glances at latent hazards, glances towards
potential hazards, and overall eye glance behavior were used as dependent variables.
Study Outcomes
Visual Displays. Visual alerts mapped spatially in the direction of the obstacle that
caused automation to disengage improved driving performance (e.g., Borojeni et al., 2016).
In addition, the visual modality can effectively convey critical information. For example,
LED strips presented at the bottom of the windshield that give information about the
status and intention of the automation have been shown to support more appropriate and
effective takeover maneuvers (e.g., Wright et al., 2017; Wulf et al., 2015). As another
example, heads up display alerts can decrease cognitive workload during takeovers as
compared to alerts displayed on a mobile device (X. Li et al., 2020).
Visual augmented reality (AR) displays can support takeover under certain
conditions, but there are situations where they might degrade performance relative to other
display locations (Lindemann et al., 2019). The presentation of RTIs in a skeuomorphic
interface, where display features mimic their real-world counterpart, such as a map with a
combination of automation capability information, is more effective than an abstract
interface (e.g., Brandenburg & Chuang, 2019; Gold et al., 2016).
In Level 3 driving, contextual haptic cues can be located on the driver’s body or on
the driver’s seat to assist in decision making (e.g., Borojeni et al., 2017; Kamezaki et al.,
2019). Relatedly, haptic displays that help drivers to be more spatially aware of their
surroundings during driver takeover led to faster responses, shorter duration lane changes,
and more scans to the rearview mirror during lane change (Pradhan et al., 2019; Tijerina et
al., 2016).
Timing of Alerts. The timing of HMI alerts has been shown to be an important
influence on takeover quality: for example, lateral accelerations were more pronounced in a
time-critical scenario and longer time budgets led to smoother control transitions (Cui et
al., 2017; Doubek et al., 2020). Others have shown that the available takeover time affected
the driver’s takeover performance more than the urgency of the request (Roche &
Brandenburg, 2018).
It is also important to consider the duration of displayed alerts; some studies have
found that longer displays of an alert resulted in increased takeover times (Louw,
Markkula, et al., 2017; Louw, Merat, et al., 2017).
Correspondence with Top-Down Interface Guidelines
HMIs identified in the literature search were evaluated to determine how many
adhered to the top-down interface guidelines. As shown in Table 3, there was significant
disparity in the percentage of HMIs that subscribed to the 17 top-down guidelines. Alert for
smooth transition was the most common feature in HMIs gathered in the review (82%). Use
of the auditory channel to capture attention, providing informative visual information, and
using multiple modalities were also common design features. Figure 2 provides a high-level
visualization of the co-occurrence of design guidelines within each of the 96 HMIs,
illustrating the large degree of variability in terms of number and clusters across HMI.
(Note that this is intended only as a coarse representation of the general pattern and not a
detailed breakdown of each individual study.)
Table 3. Percentage of HMIs in the literature search that comply with top-down design
principles, presented in decreasing order by percentage of HMIs.
Design principle Percentage
Alert for smooth transition 82.3
Auditory attention 76.0
Visually informative 63.5
Multimodal 62.5
Along the line of sight 53.1
Continuous feedback 51.0
Appropriate alert intensity 50.0
Clarity in alert message 49.0
Involve in control-loop 44.8
Alert towards source of danger 43.8
No need for continuous alert monitoring 42.7
Continuous mode display 41.7
Unintentional state change 28.1
Standardized symbology 28.1
Tactile cueing 20.8
Appropriate mode grouping 20.8
System failure contingency 2.1
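For illustration, the percentages in Table 3 amount to column means over a binary HMI-by-guideline adherence matrix. The sketch below shows that tabulation with a randomly generated matrix standing in for the actual manual coding, which is not reproduced here.

```python
# Illustrative only: compliance percentages like those in Table 3 are column means
# of a binary (HMI x guideline) adherence matrix. The actual coding was done
# manually; the random matrix below merely stands in for it.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
guidelines = ["Alert for smooth transition", "Auditory attention",
              "Visually informative", "System failure contingency"]
adherence = pd.DataFrame(rng.integers(0, 2, size=(96, len(guidelines))),
                         columns=guidelines)  # 96 HMIs coded 0/1 per guideline
pct = adherence.mean().mul(100).sort_values(ascending=False).round(1)
print(pct)  # percentage of HMIs adhering to each guideline
```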
Figure 2. Distribution of HMIs across the 17 top-down design guidelines. Each row in the
figure represents an article/HMI (see References for mapping to article numbers); each
column represents a guideline. Dark shaded cells indicate that the HMI adhered to the guideline.
Interaction Between HMI, Automation Level, and Modality.
To further examine how HMI design interacted with other study features, a series of
heatmaps were created. The aim of these visualizations was to examine whether patterns of
implemented HMI guidelines varied by level of automation or the modality of HMI
employed.
Figure 3 shows the heatmap between the level of automation employed in the study
and the 17 top-down HMI guidelines. Automation levels, from 1 to 4 are depicted (and the
multiple category denotes HMIs that were examined for more than one automation level in
a given study). Note that the sequencing of the guidelines is determined by the dendrogram
tree shown on the far-right side of the graph—a hierarchical clustering analysis that
indicates the relationship between guidelines. That is, if two guidelines are connected in
the dendrogram, it indicates that the numbers of papers that abide by each guideline are
related (i.e., they belong to the same cluster).
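A minimal sketch of this type of analysis is shown below, assuming a guideline-by-automation-level count matrix and standard scientific Python tooling (pandas, seaborn, matplotlib); the counts are made up, and the authors' actual analysis code is not reproduced here.

```python
# A rough sketch (not the authors' code) of clustering a guideline-by-automation-
# level count matrix and rendering it as a heatmap with a row dendrogram, as in
# Figure 3. The counts below are made up for illustration.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

counts = pd.DataFrame(
    {"Level 1": [2, 1, 1, 0],
     "Level 2": [14, 12, 9, 0],
     "Level 3": [45, 42, 35, 2],
     "Level 4": [6, 5, 4, 0]},
    index=["Alert for smooth transition", "Auditory attention",
           "Visually informative", "System failure contingency"],
)

# Cluster the rows (guidelines) hierarchically; keep the automation levels in order.
grid = sns.clustermap(counts, row_cluster=True, col_cluster=False,
                      method="average", metric="euclidean", cmap="Blues")
grid.ax_heatmap.set_xlabel("Level of automation")
grid.ax_heatmap.set_ylabel("HMI guideline")
plt.show()
```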
Across all levels of automation, the most common guidelines are “multimodal,”
“visually informative,” “auditory attention,” and “alert for smooth transition”. The
guideline, “system failure contingency” was only exhibited in Level 3 automation and is
most likely a consequence of Level 3 automation being the only automation level wherein
the driver must be receptive to RTIs (i.e., they may not have to supervise the automation).
Many guidelines did not appear at all in Level 4 automation, e.g., “unintentional state
change” and “appropriate mode grouping,” presumably because Level 4 automation is the
only automation level wherein the driver becomes a passenger once the automation is
engaged. Interestingly, "alert for smooth transition" and "auditory attention" were more
prevalent in Level 3 and in Levels 2 and 3, respectively, where drivers might be more
out-of-the-loop compared to Level 1 (where drivers are responsible for more aspects of
driving) and Level 4 (where drivers are less likely to be required to intervene).
Figure 3: Heatmap showing the relationship between levels of automation (x-axis) and HMI
guidelines (y-axis). A darker color represents a greater number of HMIs falling into that
category. The combined automation level indicates that the paper examined multiple levels
of automation. The figure also displays the result of hierarchical clustering by using
dendrograms.
Figure 4 shows the relationship between the interface modality and the HMI
guidelines. The combined interface category denotes HMIs with at least two modalities,
e.g., the visual and auditory modality.
Similar to the automation levels, across all modalities, the most common guidelines
are “multimodal,” “visually informative,” “auditory attention,” and “alert for smooth
transition.” Comparing the responses across rows, i.e., examining the pattern for one
guideline across the modalities, some logical patterns were evident: that auditory interfaces
are most likely to abide by the “auditory attention” guideline, visual interfaces are most
likely to abide by the “visually informative” guideline, haptic interfaces are most likely to
abide by the “tactile cueing” guidelines, and combined interfaces are most likely to abide by
the “multimodal” guideline. In addition, visual interfaces were more likely to be “along the
line of sight,” which naturally follows from the connection between modality type and the
guideline. Auditory and haptic interfaces were more likely to be associated with the
"unintentional state change" guideline, given their propensity to capture drivers' attention
(especially when drivers might be engaged in other visual tasks). Last, haptic interfaces were more likely
to have the “appropriate alert intensity,” “alert towards the source of danger,” and
“continuous feedback.”
Figure 4: Heatmap showing the relationship between interface modality (x-axis) and HMI
guidelines (y-axis). A darker color represents a greater number of HMIs falling into that
category. The combined interface indicates that the paper examined multimodal interfaces.
The figure also displays the result of hierarchical clustering using dendrograms.
Bottom-up Recommendations
As discussed in the previous sections, pre-defined HMI guidelines can help assess
interface design and provide insights into the strengths and limitations of a given HMI.
While top-down guidelines are informative and help understand essential aspects of the
HMIs, bottom-up guidelines can identify intrinsic limitations of the HMIs. More
specifically, the authors often identify overarching principles (or lack thereof) within the
article’s conclusion or discussion section. Additionally, the reviewers often identify common
themes across articles. This section presents a set of guidelines compiled from the HMIs to
account for these factors.
First, when possible, visual interfaces should prioritize pictographic information
over text-based messages (e.g., Lin et al., 2020). Second, the intensity of an alert should
increase as the available time (i.e., response window) decreases. Though related to the
top-down guideline of applying an appropriate level of intensity, this considers a dynamic
component that varies according to urgency.
Third, related but separate, staged or gradient alerts should be used to counter driver non-
response to earlier alerts. This includes variations in the delivery method (modality) as well
as other features.
Fourth, HMI alerts should be augmented or tailored based on input from driver
state monitoring systems (Hecht et al., 2018). For example, the HMIs can monitor the state
of the driver (i.e., whether they show signs of distraction or fatigue) to ensure that the
response is timely. One such approach could provide periodic attention maintenance alerts
throughout the drive to increase the situational awareness during unexpected automation
failures.
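The kind of logic implied by this guideline is sketched below: a takeover request whose lead time and modalities are adjusted when a driver state monitor flags distraction or drowsiness. The DriverState fields, thresholds, and adjustments are hypothetical and intended only to make the idea concrete.

```python
# Hypothetical illustration of driver-state-informed alerting: the takeover
# request is issued earlier and with an added tactile cue when the monitor
# reports distraction or drowsiness. Field names, thresholds, and adjustments
# are invented for illustration only.
from dataclasses import dataclass

@dataclass
class DriverState:
    eyes_off_road_s: float   # continuous gaze-off-road time reported by the DSM
    drowsiness_score: float  # 0.0 (alert) to 1.0 (severely drowsy)

def plan_rti(state: DriverState, base_lead_time_s: float = 7.0) -> dict:
    """Return an adjusted takeover-request plan based on the monitored driver state."""
    distracted = state.eyes_off_road_s > 2.0
    drowsy = state.drowsiness_score > 0.5
    lead_time_s = base_lead_time_s + (3.0 if (distracted or drowsy) else 0.0)
    modalities = ["visual", "auditory"]
    if distracted or drowsy:
        modalities.append("haptic")  # add a tactile cue to reorient attention
    return {"lead_time_s": lead_time_s, "modalities": modalities}

print(plan_rti(DriverState(eyes_off_road_s=3.2, drowsiness_score=0.2)))
# {'lead_time_s': 10.0, 'modalities': ['visual', 'auditory', 'haptic']}
```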
Finally, feedback about external objects and limitations should be emphasized over
general information about system confidence. For example, it has been shown that explicit
information about a system’s level of awareness of its environment vis-à-vis its limitations
can yield more favorable outcomes compared with generalized information regarding
uncertainty (e.g., Rezvani et al., 2016).
Table 4. Percentage of HMIs that comply with bottom-up recommendations.
When possible, visual interfaces should prioritize pictographic information over text-based messages: 22.9%
The intensity of an alert should increase as the available time (i.e., response window) decreases: 13.5%
Staged alerts should be used to counter driver non-response to earlier alerts: 10.4%
HMI alerts should be augmented or tailored based on input from driver state monitoring systems: 4.2%
Feedback about external objects and limitations should be emphasized over general information about system confidence: 2.1%
The frequency with which the new guidelines were observed in the set of 96 HMIs
was also examined (shown as the percentages in Table 4), which can be likened to the
strength of evidence. Fifty-two HMIs (over half) met at least one of the bottom-up
guidelines. Many of the HMIs met two to three of the bottom-up guidelines, whereas only
one HMI met four guidelines.
Discussion
Technologies that allow parts of the driving task to be automated have become more
widely available over the past several years. While these technologies offer safety and
convenience to motorists, they could pose a risk of being misused. Sensor-based alert
systems that monitor the driver and road environment are one possible approach to help
drivers. For example, these systems may detect driver inattention or conditions that the
automation cannot handle. In doing so, they issue alerts to the driver to return their
attention to the road or to take over control of the vehicle. As part of the driver–vehicle
HMI, these RTIs need to quickly orient the driver to the driving task without being too
startling, distracting, or irritating—lest the driver decide to ignore them or turn off the
technology altogether. While alerts have been the focus of much research and guidance
from other domains, it is important to consider how the design and implementation of HMIs
in the context of vehicle automation are best achieved. The purpose of the current study was
to review and synthesize existing research and guidance on HMIs and driver takeovers in
the context of vehicle automation and to propose a clear and comprehensive set of
recommendations that could inform future system development and implementation.
Based on the literature search, nearly 100 relevant articles were identified and
further examined. The majority of HMIs were evaluated in a driving simulator or
laboratory setting and focused on Level 3 automated systems. Not surprisingly, the focus or
purpose of these studies varied significantly, yielding a wide array of independent variables
under scrutiny, including but not limited to differences in alert types, alert modalities,
presentation of alerts, timing of takeover requests, request wording, request urgency, use
case scenarios, and various driver characteristics. Likewise, there was some variability in
the underlying outcome measures; however, given the original search criteria, these were
largely grounded in driving performance or behavioral measures (e.g., eye glance metrics).
In terms of the HMIs, the majority used multimodal approaches where visual
information was combined with auditory alerts. Collectively, the literature search yielded
useful insight into the design of HMI, types of information or feedback provided, timing and
urgency, and other aspects. As found in past studies, multimodal alerts are generally
preferred due to their effectiveness in informing the driver about takeover. Additionally,
multimodal alerts with cues of high urgency resulted in quicker reaction times, as
compared to a single-alert modality, and captured drivers' peripheral attention. At the same
time, in a few instances, multimodal alerts were a source of perceived annoyance.
In evaluating how firmly the HMIs were grounded in some of the existing top-down
guidelines, it was noted that there was significant variability in the proportion of HMIs
that adhered to different guidance. This is not an indictment of past work as these
guidelines were developed in concert with some of the HMIs, not to mention that research
in this area is still emerging. In addition, some HMIs were developed for one specific
purpose (e.g., to mitigate distracted driving while using a Level 2 vehicle) and hence,
guidelines not relevant to that purpose were not germane. Overall, some guidelines were
reflected in many HMIs while others have not been widely studied in this domain—possibly
representing areas needing future research. For example, the most common guideline—
“Alert for smooth transition”—appeared in over 80% of the HMIs whereas “System failure
contingency" only appeared in 2% of the HMIs, implying that it is an area ripe for future and
sustained research. It must be acknowledged that guidance, especially for the less
prevalent guidelines, may reflect good practices based on work in other domains or based on
fundamental design and human factors principles. It follows that the strength of evidence
varies across guidelines, even though many in practice should lead to more effective HMIs.
There was also considerable variability in the number of different guidelines adhered to in
a single study (e.g., see Table 3). More research in this area could help evaluate how
combinations of guidance can help improve safety, performance, and user experience. Such
work can also help prioritize different features and expose areas or certain display
configurations where guidance might not work or interact as effectively (e.g., some design
approaches would render the HMI incompatible with some of the guidance).
Based on the review, a revised and final set of recommendations was distilled
borrowing from past guidance as well as from new information gleaned from studies
identified in the review (i.e., a more comprehensive merging of the top-down and bottom-up
guidance). In doing so, attempts were made to use clear, concise wording to ease
comprehension. The recommendations have been arrayed under broader, non-mutually
exclusive categories. In some cases, specific guidelines from other sources have been
combined. For example, in leveraging the visual modality to convey information to drivers,
several different approaches or sub-items are presented (Bazilinskyy & DeWinter, 2015;
Naujoks, Wiedemann, et al., 2019; van den Beukel & van der Voort, 2017).
Modality
1. Multiple modalities should be used together in a complementary fashion to convey
information and capture driver attention, especially in urgent situations.
2. Visual interfaces should prioritize pictographic information and standardized
symbology over text-based messages. Text should be used to supplement non-standard
symbols, preferably in non-time-critical situations.
3. Auditory and/or tactile displays should complement visual information and be used
to help reorient driver attention in critical situations. Sustained attention to the HMI
should not be required in time-critical situations.
Timing and Stages
1. Alerts should give drivers sufficient time to regain control safely and effectively.
2. The intensity of the alert should reflect the urgency of the situation, without being a
hindrance, distraction, or annoyance to the driver. The intensity of an alert should
increase as the available time (i.e., response window) decreases.
3. Gradient or multi-staged alerts (e.g., first visual, then auditory) should be used to
help convey urgency and to counter non-responses to earlier alerts.
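As a rough illustration of recommendations 2 and 3, the sketch below maps the remaining response window to an escalating alert stage; the time thresholds and modality assignments are assumptions made for illustration, not validated design values.

```python
# A minimal sketch, under assumed thresholds, of recommendations 2 and 3:
# alert intensity grows as the remaining response window shrinks, and the
# alert escalates in stages if the driver has not yet responded.
def alert_stage(time_remaining_s: float) -> dict:
    """Map the remaining response window to a staged, increasingly intense alert."""
    if time_remaining_s > 8.0:
        return {"stage": 1, "modalities": ["visual"], "intensity": "low"}
    if time_remaining_s > 4.0:
        return {"stage": 2, "modalities": ["visual", "auditory"], "intensity": "medium"}
    return {"stage": 3, "modalities": ["visual", "auditory", "haptic"], "intensity": "high"}

for t in (10.0, 6.0, 2.0):  # successive checks while the driver has not responded
    print(t, alert_stage(t))
```

In practice, the thresholds and stage content would need to be derived and validated empirically rather than fixed a priori.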
Limitations and Future Research Needs
While the review has highlighted important guidelines for HMI development, there
remain areas for future research. First, some HMIs compiled in the review were evaluated
with small samples and need to be validated with larger samples to ensure generalizability.
Second, very few HMIs were evaluated on different data sources (e.g., physiological data,
driving data, and survey data) to understand the efficacy of the HMI in takeover
performance. Future research should consider a multi-faceted approach in terms of outcome
measures. Relatedly, many HMIs appear to be generated by research teams in support of a
research question or utilize features that have been incorporated into driving simulator
software packages. As such, there are potential gaps between some HMIs examined in the
context of these studies and existing HMIs in OEM production vehicles, which would
provide an understanding of how partial or conditional automation can be implemented
within the limitations of actual current systems. Third, while HMIs were categorized
according to modality and level of automation, another potentially more useful
characterization is level of engagement by the driver. Such a categorization could
differentiate HMIs based on urgency of actions or cognitive demand (remembering,
understanding, evaluating, etc.) and then relate level of engagement to the guidelines (e.g.,
count the number of HMIs that require high driver engagement and abide by the "along the
line of sight" guideline). Fourth, a meta-analysis would provide quantitative (and
complementary) evidence to bolster the findings of the literature review, subject to data
availability. A more precise mapping of different guidelines to safety and performance
outcomes would be helpful in prioritizing design elements. Ideally, such an effort would also
allow for a more careful delineation of HMI design approaches across different levels of
automation. Last, guidance will continue to evolve as technologies and driver
responsibilities change. As such, reviews such as these must be conducted on a regular
basis and/or be updated as necessary.
Though germane to the current discussion, the current review intentionally did not
focus on a large body of research specific to driver state monitoring systems: a system that
assesses if a driver is capable of safely completing a task as monitored through their
physiological state and driving behavior (Guettas et al., 2019), except in cases where these
systems were used in the context of alerting or an HMI. Driver state monitoring is an
integral feature of some automation-oriented HMIs, though other systems do not necessarily
rely on inputs from driver state monitoring. From the review, many articles focused on
driver state monitoring: many addressed the underlying data elements, whether
driver-based (e.g., physiological information) or vehicle-based (e.g., steering/pedal inputs),
or the algorithms that are used to generate predictions about the driver state. Driver
state monitoring does not supplant the need for good and thoughtful design of HMI and in-
vehicle alerts, but it may have an important role in the implementation, utility, and
acceptance of these features. For example, a system designed to reorient the drivers’
attention to the forward roadway might have fewer false positives if it considers
information about the driver’s point of regard, compared to a system that uses other
(non-driver state monitoring) inputs. Future research should consider the integration of
driver state monitoring systems into HMIs and how the thoughtful design of an HMI can
lead to a successful driver–automation partnership.
Finally, work is currently underway that leverages the current review of design
guidelines. This work examines the effects of different HMI configurations, based on their
implementation of different guidelines (i.e., basic HMI, which incorporates few guidelines
versus an enhanced HMI, which incorporates many), on various outcome measures.
References
Bazilinskyy, P., & DeWinter, J. (2015). Auditory interfaces in automated driving: An
international survey. PeerJ Computer Science, 1:e13. https://doi.org/10.7717/peerj-
cs.13
Biondi, F., Rossi, R., Gastaldi, M., & Mulatti, C. (2014). Beeping ADAS: Reflexive effect on
drivers’ behavior. Transportation Research Part F: Traffic Psychology and
Behaviour, 25, Part A, 27–33. https://doi.org/10.1016/j.trf.2014.04.020
*(3) Borojeni, S. S., Chuang, L., Heuten, W., & Boll, S. (2016). Assisting drivers with
ambient take-over requests in highly automated driving. AutomotiveUI ‘16:
Proceedings of the 8th International Conference on Automotive User Interfaces and
Interactive Vehicular Applications, 237–244.
https://doi.org/10.1145/3003715.3005409
*(4) Borojeni, S. S., Wallbaum, T., Heuten, W., & Boll, S. (2017). Comparing shape-
changing and vibro-tactile steering wheels for take-over requests in highly
automated driving. AutomotiveUI '17: Proceedings of the 9th International,
Conference on Automotive User Interfaces and Interactive Vehicular Applications
221–225. https://doi.org/10.1145/3122986.3123003
*(5) Brandenburg, S., & Chuang, L. (2019). Take-over requests during highly automated
driving: How should they be presented and under what conditions? Transportation
Research Part F: Traffic Psychology and Behaviour, 66, 214–225.
https://doi.org/10.1016/j.trf.2019.08.023
*(6) Brandenburg, S., & Roche, F. (2020). Behavioral changes to repeated takeovers in
automated driving: The drivers’ ability to transfer knowledge and the effects of
takeover request process. Transportation Research Part F: Traffic Psychology and
Behaviour, 73, 15–28. https://doi.org/10.1016/j.trf.2020.06.002
*(7) Calvi, A., D’Amico, F., Ciampoli, L. B., & Ferrante, C. (2020). Evaluation of driving
performance after a transition from automated to manual control: a driving
simulator study. Transportation Research Procedia, 45, 755–762.
https://doi.org/10.1016/j.trpro.2020.02.101
Campbell, J. L., Richard, C., Brown, J., & McCallum, M. (2007). Crash Warning System
Interfaces: Human Factors Insights and Lessons. Washington, D.C.: National
Highway Traffic Safety Administration.
http://www.nhtsa.gov/DOT/NHTSA/NRD/Multimedia/PDFs/Crash%20Avoidance/200
7/CWS_HF_Insights_Task_5_Final_Rpt.pdf
*(8) Clark, H., McLaughlin, A. C., Williams, B., & Feng, J. (2017). Performance in takeover
and characteristics of non-driving related tasks during highly automated driving in
younger and older drivers. Proceedings of the Human Factors and Ergonomics
Society Annual Meeting, 61(1), 37–41. https://doi.org/10.1177/1541931213601504.
*(9) Cohen-Lazry, G., Borowsky, A., & Oron-Gilad, T. (2020). The impact of auditory
continual feedback on take-overs in Level 3 automated vehicles. Transportation
Research Part F: Traffic Psychology and Behaviour, 75, 145–159.
https://doi.org/10.1016/j.trf.2020.10.003
*(10) Cohen-Lazry, G., Katzman, N., Borowsky, A., & Oron-Gilad, T. (2019). Directional
tactile alerts for take-over requests in highly-automated driving. Transportation
Research Part F: Traffic Psychology and Behaviour, 65, 217–226.
https://doi.org/10.1016/j.trf.2019.07.025
Collaud, R., Reppa, I., Défayes, L., McDougall, S., Henchoz, N., & Sonderegger, A. (2022).
Design standards for icons: The independent role of aesthetics, visual complexity
and concreteness in icon design and icon understanding. Displays 74, 102290.
https://doi.org/10.1016/j.displa.2022.102290
*(11) Cortens, B., Nonnecke, B., & Trick, L. M., (2019). Effect of alert presentation mode
and hazard direction on driver takeover from an autonomous vehicle. Driving
Assessment Conference, 10(2019) 133–139.
https://doi.org/10.17077/drivingassessment.
*(12) Cui, W., Zhou, R., Yan, Y., Ran, L., & Zhang, X. (2017). Effect of warning levels on
drivers’ decision-making with the self-driving vehicle system. AHFE 2017: Advances
in Human Aspects of Transportation, 720–729. https://doi.org/10.1007/978-3-319-
60441-1_69
Dijksterhuis, C., Stuiver, A., Mulder, B., Brookhuis, K. A., & de Waard, D. (2012). An
Adaptive Driver Support System: User experiences and driving performance in a
simulator. Human Factors, 54(5), 772–785.
https://doi.org/10.1177/0018720811430502
*(13) Doubek, F., Loosveld, E., Happee, R., & De Winter, J. (2020). Takeover quality:
assessing the effects of time budget and traffic density with the help of a trajectory-
planning method. Journal of Advanced Transportation, 2020.
https://doi.org/10.1155/2020/6173150
Dunn, N., Dingus, T.A. & Soccolich, S. (2019). Understanding the Impact of Technology: Do
Advanced Driver Assistance and Semi-Automated Vehicle Systems Lead to Improper
Driving Behavior? (Technical Report). Washington, D.C.: AAA Foundation for Traffic
Safety.
Edworthy, J. (1994). The design and implementation of non-verbal auditory warnings.
Applied Ergonomics, 25(4), 202–210. https://doi.org/10.1016/0003-6870(94)90001-9
*(14) Epple, S., Roche, F., & Brandenburg, S. (2018). The sooner the better: Drivers’
reactions to two-step take-over requests in highly automated driving. Proceedings of
the Human Factors and Ergonomics Society Annual Meeting, 62(1), 1883–1887.
https://doi.org/10.1177/1541931218621428
*(15) Eriksson, A., & Stanton, N. A. (2017). Takeover time in highly automated vehicles:
Noncritical transitions to and from manual control. Human Factors, 59(4), 689–705.
https://doi.org/10.1177/0018720816685832
*(16) Eriksson, A., Petermeijer, S. M., Zimmermann, M., De Winter, J. C. F., Bengler, K. J.,
& Stanton, N. A. (2019). Rolling out the red (and green) carpet: Supporting driver
decision making in automation-to-manual transitions. IEEE Transactions on
Human-Machine Systems, 49(1), 20–31. https://doi.org/10.1109/THMS.2018.2883862
*(17) Fitch, G. M., Hankey, J. M., Kleiner, B. M., & Dingus, T. A. (2011). Driver
comprehension of multiple haptic seat alerts intended for use in an integrated
collision avoidance system. Transportation Research Part F: Traffic Psychology and
Behaviour, 14(4), 278–290. https://doi.org/10.1016/j.trf.2011.02.001
*(18) Forster, Y., Naujoks, F., Neukum, A., & Huestegge, L. (2017). Driver compliance to
take-over requests with different auditory outputs in conditional automation.
Accident Analysis & Prevention, 109, 18–28.
https://doi.org/10.1016/j.aap.2017.09.019
*(19) Friedrichs, T., Ostendorp, M. C., & Lüdtke, A. (2016). Supporting drivers in truck
platooning: Development and evaluation of two novel human-machine interfaces.
AutomotiveUI ‘16: Proceedings of the 8th International Conference on Automotive
User Interfaces and Interactive Vehicular Applications, 277–284.
https://doi.org/10.1145/3003715.3005451
*(20) Gaspar, J. G., Schwarz, C., Kashef, O., Schmitt, R., & Shull, E., (2018). Using Driver
State Detection in Automated Vehicles (SAFER-SIM Final Report). Iowa City, IA:
University of Iowa. http://safersim.nads-
sc.uiowa.edu/final_reports/UI%201%20Y1%20report.pdf
*(21) Gold, C., Körber, M., Lechner, D., & Bengler, K. (2016). Taking over control from
highly automated vehicles in complex traffic situations. Human Factors, 58(4), 642–
652. https://doi.org/10.1177/0018720816634226
Guettas, A., Soheyb, A., & Kazar, O. (2019). Driver State Monitoring System: A review.
BDIoT’19: Proceedings of the 4th International Conference on Big Data and Internet
of Things, 1–7. https://doi.org/10.1145/3372938.3372966
*(22) He, D., Kanaan, D., & Donmez, B. (2021). In-vehicle displays to support driver
anticipation of traffic conflicts in automated vehicles. Accident Analysis &
Prevention, 149, 105842. https://doi.org/10.1016/j.aap.2020.105842
Hecht, T., Feldhütter, A., Radlmayr, J., Nakano, Y., Miki, Y., Henle, C., & Bengler, K.
(2018). A review of driver state monitoring systems in the context of automated
driving. Proceedings of the 20th Congress of the International Ergonomics
Association (IEA 2018): Advances in Intelligent Systems and Computing, 823.
Springer, Cham. https://doi.org/10.1007/978-3-319-96074-6_43
*(23) Hester, M., Lee, K., & Dyre, B. P. (2017). “Driver take over:” A preliminary
exploration of driver trust and performance in autonomous vehicles. Proceedings of
the Human Factors and Ergonomics Society Annual Meeting 61(1), 1969–1973.
https://doi.org/10.1177/1541931213601971
*(24) Hock, P., Kraus, J., Walch, M., Lang, N., & Baumann, M. (2016). Elaborating
feedback strategies for maintaining automation in highly automated driving.
Automotive'UI 16: Proceedings of the 8th International Conference on Automotive
User Interfaces and Interactive Vehicular Applications, 105–112.
https://doi.org/10.1145/3003715.3005414
Hoffman, J., Lee, J., & Hayes, E. (2003). Driver preference of collision warning strategy
and modality. Driving Assessment Conference 2(2003), 69.
https://doi.org/10.17077/drivingassessment.1098
Horowitz, A. D., & Dingus, T. A. (1992). Warning signal design: A key human factors issue
in an in-vehicle front-to-rear-end collision warning system. Proceedings of the
Human Factors Society Annual Meeting, 36(13), 1011–1013.
*(25) Huang, G., Steele, C., Zhang, X., & Pitts, B. J. (2019). Multimodal cue combinations: A
possible approach to designing in-vehicle takeover requests for semi-autonomous
driving. Proceedings of the Human Factors and Ergonomics Society Annual Meeting,
63(1), 1739–1743. https://doi.org/10.1177/1071181319631053
*(26) Institute of Transportation Engineers (2018). Semi-autonomous connected vehicle
safety systems and collision avoidance: Findings from two simulated cooperative
Adaptive Cruise Control studies. ITE Journal 88(6), 30–35.
*(27) Johns, M., Mok, B., Talamonti, W., Sibi, S., & Ju, W. (2017, October). Looking ahead:
Anticipatory interfaces for driver-automation collaboration. 2017 IEEE 20th
International Conference on Intelligent Transportation Systems (ITSC), 1–7.
https://doi.org/10.1109/ITSC.2017.8317762
*(28) Johns, M., Strack, G., & Ju, W. (2018). Driver assistance after handover of control
from automation. 2018 21st International Conference on Intelligent Transportation
Systems (ITSC), 2104-2110. https://doi.org/10.1109/itsc.2018.8569499
*(29) Kamezaki, M., Hayashi, H., Manawadu, U., & Sugano, S. (2019). Human-centered
intervention based on tactical-level input in unscheduled takeover scenarios for
highly-automated vehicles. International Journal of Intelligent Transportation
Systems Research, 18(3), 451–460. https://doi.org/10.1007/s13177-019-00217-x
*(30) Kasuga, N., Tanaka, A., Miyaoka, K., & Ishikawa, T. (2020). Design of an HMI system
promoting smooth and safe transition to manual from Level 3 automated driving.
International Journal of Intelligent Transportation Systems Research, 18(1), 1–12.
https://doi.org/10.1007/s13177-018-0166-6
*(31) Kim, J., Kim, W., Kim, H. S., & Yoon, D. (2018). Effectiveness of subjective
measurement of drivers’ status in automated driving. 2018 IEEE 88th Vehicular
Technology Conference (VTC-Fall), 1–2.
https://doi.org/10.1109/VTCFall.2018.8690557
*(32) Kim, J. W., & Yang, J. H. (2020). Understanding metrics of vehicle control take-over
requests in simulated automated vehicles. International Journal of Automotive
Technology, 21(3), 757–770. https://doi.org/10.1007/s12239-020-0074-z
Kim, T.-Y., Ko, H., & Kim, S.-H. (2020). Data analysis for emotion classification based on
bio-information in self-driving vehicles. Journal of Advanced Transportation, 2020.
https://doi.org/10.1155/2020/8167295
*(33) Koo, J., Shin, D., Steinert, M., & Leifer, L. (2016). Understanding driver responses to
voice alerts of autonomous car operations. International Journal of Vehicle Design,
70(4), 377–392. https://doi.org/10.1504/IJVD.2016.076740
*(34) Körber, M., Prasch, L., & Bengler, K. (2018). Why do I have to drive now? Post hoc
explanations of takeover requests. Human Factors, 60(3), 305–323.
https://doi.org/10.1177/0018720817747730
Korpi, J., & Ahonen-Rainio, P. (2015). Design guidelines for pictographic symbols: Evidence
from symbols designed by students. 1st ICA European Symposium on Cartography,
1–16.
Laughery, K. R., & Wogalter, M. S. (2006). Designing effective warnings. Reviews of Human
Factors and Ergonomics, 2(1), 241–271. https://doi.org/10.1177/1557234x0600200109
*(35) Lee, J., & Yang, J. H. (2020). Analysis of driver’s EEG given take-over alarm in SAE
level 3 automated driving in a simulated environment. International Journal of
Automotive Technology, 21(3), 719–728. https://doi.org/10.1007/s12239-020-0070-3
Lee, J. D., Gore, B. F., & Campbell, J. L. (1999). Display alternatives for in-vehicle warning
and sign information: Message style, location, and modality. Transportation Human
Factors, 1(4), 347–375. https://doi.org/10.1207/sthf0104_6
Lehto, M. R. (1992). Designing warning signs and warning labels: Part I—Guidelines for
the practitioner. International Journal of Industrial Ergonomics, 10(1–2), 105–113.
https://doi.org/10.1016/0169-8141(92)90052-2
*(36) Li, S., Blythe, P., Guo, W., & Namdeo, A. (2018). Investigation of older driver's
takeover performance in highly automated vehicles in adverse weather conditions.
IET Intelligent Transport Systems, 12(9), 1157–1165.
https://doi.org/10.1049/iet-its.2018.0104
*(37) Li, S., Blythe, P., Guo, W., & Namdeo, A. (2019). Investigating the effects of age and
disengagement in driving on driver’s takeover control performance in highly
automated vehicles. Transportation Planning and Technology, 42(5), 470–497.
https://doi.org/10.1080/03081060.2019.1609221
*(38) Li, S., Blythe, P., Guo, W., Namdeo, A., Edwards, S., Goodman, P., & Hill, G. (2019).
Evaluation of the effects of age-friendly human-machine interfaces on the driver’s
takeover performance in highly automated vehicles. Transportation Research Part F:
Traffic Psychology and Behaviour, 67, 78–100.
https://doi.org/10.1016/j.trf.2019.10.009
*(39) Li, X., Schroeter, R., Rakotonirainy, A., Kuo, J., & Lenné, M. G. (2020). Effects of
different non-driving-related-task display modes on drivers’ eye-movement patterns
during take-over in an automated vehicle. Transportation Research Part F: Traffic
Psychology and Behaviour, 70, 135–148. https://doi.org/10.1016/j.trf.2020.03.001
*(40) Lin, Q., Li, S., Ma, X., & Lu, G. (2020). Understanding take-over performance of high
crash risk drivers during conditionally automated driving. Accident Analysis &
Prevention, 143. https://doi.org/10.1016/j.aap.2020.105543
*(41) Lindemann, P., Müller, N., & Rigoll, G. (2019). Exploring the use of augmented
reality interfaces for driver assistance in short-notice takeovers. 2019 IEEE
Intelligent Vehicles Symposium (IV), 804–809.
https://doi.org/10.1109/IVS.2019.8814237
*(42) Llaneras, R. E., Cannon, B. R., & Green, C. A. (2017). Strategies to assist drivers in
remaining attentive while under partially automated driving: Verification of
human–machine interface concepts. Transportation Research Record, 2663(1), 20–
26. https://doi.org/10.3141/2663-03
*(43) Lorenz, L., Kerschbaum, P., & Schumann, J. (2014, September). Designing take over
scenarios for automated driving: How does augmented reality support the driver to
get back into the loop? Proceedings of the Human Factors and Ergonomics Society
Annual Meeting 58(1), 1681–1685. https://doi.org/10.1177/1541931214581351
*(44) Lotz, A., Russwinkel, N., & Wohlfarth, E. (2020). Take-over expectation and criticality
in Level 3 automated driving: a test track study on take-over behavior in semi-
trucks. Cognition, Technology & Work, 22(4), 733–744.
https://doi.org/10.1007/s10111-020-00626-z
*(45) Louw, T., Kountourioti, G., Carsten, O., et al. (2015). Driver inattention during vehicle
automation: How does driver engagement affect resumption of control? 4th
International Conference on Driver Distraction and Inattention (DDI2015), Sydney.
*(47) Louw, T., Markkula, G., Boer, E., Madigan, R., Carsten, O., & Merat, N. (2017).
Coming back into the loop: Drivers’ perceptual-motor performance in critical events
after automated driving. Accident Analysis and Prevention 108, 9–18.
https://doi.org/10.1016/j.aap.2017.08.011
*(46) Louw, T., & Merat, N. (2017). Are you in the loop? Using gaze dispersion to
understand driver visual attention during vehicle automation. Transportation
Research Part C: Emerging Technologies, 76, 35–50.
https://doi.org/10.1016/j.trc.2017.01.001
*(48) Louw, T., Merat, N., & Jamson, H. (2015). Engaging with highly automated driving:
To be or not to be in the loop? Driving Assessment Conference 8(2015), 190–196.
https://doi.org/10.17077/drivingassessment.1570
*(49) Manca, L., De Winter, J. C. F., & Happee, R. (2015). Visual displays for automated
driving: A survey [Paper presentation]. Workshop on Adaptive Ambient In-Vehicle
Displays and Interactions, Nottingham, UK.
https://doi.org/10.13140/RG.2.1.2677.1608
*(50) Merat, N., Jamson, A. H., Lai, F. C., Daly, M., & Carsten, O. M. (2014). Transition to
manual: Driver behaviour when resuming control from a highly automated vehicle.
Transportation Research Part F: Traffic Psychology and Behaviour, 27, Part B,
274–282. https://doi.org/10.1016/j.trf.2014.09.005
Merat, N., Seppelt, B. D., Louw, T., Engström, J., Lee, J. D., Johansson, E., Green, C. A.,
Katazaki, S., Monk, C., Itoh, M., McGehee, D., Sunda, T., Unoura, K., Victor, T.,
Schieben, A., & Keinath, A. (2019). The “Out-of-the-Loop” concept in automated
driving: Proposed definition, measures and implications. Cognition, Technology &
Work, 21, 87–98. https://doi.org/10.1007/s10111-018-0525-8
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G. & The PRISMA Group. (2009). Preferred
reporting items for systematic reviews and meta-analyses: The PRISMA statement.
PLoS Medicine, 6(7). https://doi.org/10.1371/journal.pmed.1000097
*(51) Mok, B. K., Johns, M., Lee, K. J., Miller, D. B., Sirkin, D., Ive, H. P., & Ju, W. (2015).
Emergency, automation off: Unstructured transition timing for distracted drivers of
automated vehicles. 2015 IEEE 18th International Conference on Intelligent
Transportation Systems, 2458–2464. https://doi.org/10.1109/ITSC.2015.396
*(52) Mok, B., Johns, M., Yang, S., & Ju, W. (2017, October). Actions speak louder: Effects
of a transforming steering wheel on post-transition driver performance. 2017 IEEE
20th International Conference on Intelligent Transportation Systems (ITSC), 1–8.
https://doi.org/10.1109/ITSC.2017.8317878
*(53) Nair, P., Wang, W., & Lin, H. (2020). Lookie here! Designing directional user
indicators across displays in conditional driving automation. SAE Technical Paper
(No. 2020-01-1201). https://doi.org/10.4271/2020-01-1201
*(54) Naujoks, F., Forster, Y., Wiedemann, K., & Neukum, A. (2017). Improving usefulness
of automated driving by lowering primary task interference through HMI design.
Journal of Advanced Transportation, 2017. https://doi.org/10.1155/2017/6105087
Naujoks, F., Hergeth, S., Keinath, A., Wiedemann, K., & Schömig, N. (2019). Development
and application of an expert assessment method for evaluating the usability of SAE
Level 3 ADS HMIs. 26th International Technical Conference on the Enhanced Safety
of Vehicles (ESV): Enabling a Safer Tomorrow.
*(56) Naujoks, F., Höfling, S., Purucker, C., & Zeeb, K. (2018). From partial and high
automation to manual driving: Relationship between non-driving related tasks,
drowsiness and take-over performance. Accident Analysis & Prevention, 121, 28–42.
https://doi.org/10.1016/j.aap.2018.08.018
*(58) Naujoks, F., Mai, C., & Neukum, A. (2014). The effect of urgency of take-over requests
during highly automated driving under distraction conditions. Proceedings of the 5th
International Conference on Applied Human Factors and Ergonomics (AHFE),
2099–2106.
Naujoks, F., Wiedemann, K., Schömig, N., Hergeth, S., & Keinath, A. (2019). Towards
guidelines and verification methods for automated vehicle HMIs. Transportation
Research Part F: Traffic Psychology and Behaviour, 60, 121–136.
https://doi.org/10.1016/j.trf.2018.10.012
*(57) Nees, M. A., Helbein, B., & Porter, A. (2016). Speech auditory alerts promote memory
for alerted events in a video-simulated self-driving car ride. Human Factors, 58(3),
416–426. https://doi.org/10.1177/0018720816629279
*(59) Ono, S., Sasaki, H., Kumon, H., Fuwamoto, Y., Kondo, S., Narumi, T., Tanikawa, T., &
Hirose, M. (2019). Improvement of driver active interventions during automated
driving by displaying trajectory pointers—A driving simulator study. Traffic Injury
Prevention, 20(sup1), S152–S156. https://doi.org/10.1080/15389588.2019.1610170
*(60) Petermeijer, S., Doubek, F., & de Winter, J. (2017). Driver response times to auditory,
visual, and tactile take-over requests: A simulator study with 101 participants. 2017
IEEE International Conference on Systems, Man, and Cybernetics (SMC),
1505–1510. https://doi.org/10.1109/SMC.2017.8122827
*(61) Petermeijer, S. M., Cieler, S., & De Winter, J. C. (2017). Comparing spatially static
and dynamic vibrotactile take-over requests in the driver seat. Accident Analysis &
Prevention, 99, 218–227. https://doi.org/10.1016/j.aap.2016.12.001
*(62) Petermeijer, S., Bazilinskyy, P., Bengler, K., & de Winter, J. (2017). Take-over
again: Investigating multimodal and directional TORs to get the driver back into the
loop. Applied Ergonomics, 62, 204–215. https://doi.org/10.1016/j.apergo.2017.02.023
*(63) Pokam, R., Debernard, S., Chauvin, C., & Langlois, S. (2019). Principles of
transparency for autonomous vehicles: First results of an experiment with an
augmented reality human–machine interface. Cognition, Technology & Work, 21(4),
643–656. https://doi.org/10.1007/s10111-019-00552-9
*(64) Politis, I., Brewster, S., & Pollick, F. (2015). Language-based multimodal displays
for the handover of control in autonomous cars. AutomotiveUI '15: Proceedings of the
7th International Conference on Automotive User Interfaces and Interactive
Vehicular Applications, 3–10. https://doi.org/10.1145/2799250.2799262
*(65) Politis, I., Brewster, S., & Pollick, F. (2017). Using multimodal displays to signify
critical handovers of control to distracted autonomous car drivers. International
Journal of Mobile Human Computer Interaction, 9(3), 1–16.
https://doi.org/10.4018/ijmhci.2017070101
*(66) Pradhan, A. K., Crossman, J., & Sypniewski, A. (2019). Improving driver engagement
during L2 automation: A pilot study. Driving Assessment Conference 10(2019), 280–
286. https://doi.org/10.17077/drivingassessment.1707
Pritchett, A. R. (2009). Aviation automation: General perspectives and specific guidance for
the design of modes and alerts. Reviews of Human Factors and Ergonomics, 5(1), 82–
113. https://doi.org/10.1518/155723409X448026
*(67) Rezvani, T., Driggs-Campbell, K., Sadigh, D., Sastry, S. S., Seshia, S. A., & Bajcsy, R.
(2016). Towards trustworthy automation: User interfaces that convey internal and
external awareness. 2016 IEEE 19th International Conference on Intelligent
Transportation Systems (ITSC), 682–688. https://doi.org/10.1109/ITSC.2016.7795627
*(68) Roche, F., & Brandenburg, S. (2018). Should the urgency of auditory-tactile takeover
requests match the criticality of takeover situations? 2018 21st International
Conference on Intelligent Transportation Systems (ITSC), 1035–1040.
https://doi.org/10.1109/ITSC.2018.8569650
*(69) Roche, F., Somieski, A., & Brandenburg, S. (2018). Behavioral changes to repeated
takeovers in highly automated driving: Effects of the takeover-request design and
the nondriving-related task modality. Human Factors, 61(5), 839–849.
https://doi.org/10.1177/0018720818814963
Rogers, W. A., Lamson, N., & Rousseau, G. K. (2000). Warning research: An integrative
perspective. Human Factors, 42(1), 102–139.
https://doi.org/10.1518/001872000779656624
*(70) Ruscio, D., Ciceri, M. R., & Biassoni, F. (2015). How does a collision warning system
shape driver’s brake response time? The influence of expectancy and automation
complacency on real-life emergency braking. Accident Analysis & Prevention, 77, 72–
81. https://doi.org/10.1016/j.aap.2015.01.018
SAE. (2021). Taxonomy and Definitions for Terms Related to Driving Automation Systems
(Standard No. J3016). SAE International.
https://www.sae.org/standards/content/j3016_202104/
*(71) Salminen, K., Farooq, A., Rantala, J., Surakka, V., & Raisamo, R. (2019). Unimodal
and multimodal signals to support control transitions in semiautonomous vehicles.
AutomotiveUI '19: Proceedings of the 11th International Conference on Automotive
User Interfaces and Interactive Vehicular Applications, 308–318.
https://doi.org/10.1145/3342197.3344522
*(72) Schmidt, J., Dreißig, M., Stolzmann, W., & Rötting, M. (2017). The influence of
prolonged conditionally automated driving on the take-over ability of the driver.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 61(1),
1974–1978. https://doi.org/10.1177/1541931213601972
*(73) Schwalk, M., Kalogerakis, N., & Maier, T. (2015). Driver support by a vibrotactile seat
matrix – Recognition, adequacy and workload of tactile patterns in take-over
scenarios during automated driving. Procedia Manufacturing, 3, 2466–2473.
https://doi.org/10.1016/j.promfg.2015.07.507
*(74) Seppelt, B. D., & Lee, J. D. (2019). Keeping the driver in the loop: Dynamic feedback
to support appropriate use of imperfect vehicle control automation. International
Journal of Human-Computer Studies, 125, 66–80.
https://doi.org/10.1016/j.ijhcs.2018.12.009
*(75) Shull, E., Gaspar, J. G., Schmitt, R., & Vecera, S. (2019). Using Human-Machine
Interfaces to Convey Feedback in Automated Vehicles (SAFER-SIM Final Report).
Iowa City, IA: University of Iowa.
http://safersim.nads-sc.uiowa.edu/final_reports/UI%201%20Y2%20report.pdf
Stanton, N. A. (1994). Human Factors in Alarm Design. CRC Press.
https://doi.org/10.1201/9780203481714
*(76) Tang, Q., Guo, G., Zhang, Z., Zhang, B., & Wu, Y. (2021). Olfactory facilitation of
takeover performance in highly automated driving. Human Factors, 63(4), 553–564.
https://doi.org/10.1177/0018720819893137
*(78) Telpaz, A., Rhindress, B., Zelman, I., & Tsimhoni, O. (2015). Haptic seat for
automated driving: Preparing the driver to take control effectively. AutomotiveUI
'15: Proceedings of the 7th International Conference on Automotive User Interfaces
and Interactive Vehicular Applications, 23–30.
https://doi.org/10.1145/2799250.2799267
*(77) Tijerina, L., Blommer, M., Curry, R., Swaminathan, R., Kochhar, D. S., & Talamonti,
W. (2016). An exploratory study of driver response to reduced system confidence
notifications in automated driving. IEEE Transactions on Intelligent Vehicles, 1(4),
325–334. https://doi.org/10.1109/TIV.2017.2691158
*(79) Tobias, C., Su, C. Y., Kolburg, L., & Lathrop, B. (2013). Cocktail party effect &
attention capture in semi-autonomous driving. Driving Assessment Conference
7(2013). https://doi.org/10.17077/drivingassessment.1528
van den Beukel, A. P., & van der Voort, M. C. (2017). How to assess driver’s interaction
with partially automated driving systems – A framework for early concept
assessment. Applied Ergonomics, 59, Part A, 302–312.
https://doi.org/10.1016/j.apergo.2016.09.005
*(80) van den Beukel, A. P., van der Voort, M. C., & Eger, A. O. (2016). Supporting the
changing driver’s task: Exploration of interface designs for supervision and
intervention in automated driving. Transportation Research Part F: Traffic
Psychology and Behaviour, 43, 279–301. https://doi.org/10.1016/j.trf.2016.09.009
*(81) van der Heiden, R. M. A., Iqbal, S. T., & Janssen, C. P. (2017). Priming drivers before
handover in semi-autonomous cars. CHI '17: Proceedings of the 2017 CHI Conference
on Human Factors in Computing Systems, 392–404.
https://doi.org/10.1145/3025453.3025507
*(83) Vlakveld, W., van Nes, N., de Bruin, J., Vissers, L., & van der Kroft, M. (2018).
Situation awareness increases when drivers have more time to take over the wheel
in a Level 3 automated car: A simulator study. Transportation Research Part F:
Traffic Psychology and Behaviour, 58, 917–929.
https://doi.org/10.1016/j.trf.2018.07.025
*(84) Vogelpohl, T., Kühn, M., Hummel, T., Gehlert, T., & Vollrath, M. (2018).
Transitioning to manual driving requires additional time after automation
deactivation. Transportation Research Part F: Traffic Psychology and Behaviour, 55,
464–482. https://doi.org/10.1016/j.trf.2018.03.019
*(85) Walch, M., Lange, K., Baumann, M., & Weber, M. (2015). Autonomous driving:
Investigating the feasibility of car-driver handover assistance. AutomotiveUI '15:
Proceedings of the 7th International Conference on Automotive User Interfaces and
Interactive Vehicular Applications, 11–18. https://doi.org/10.1145/2799250.2799268
*(86) Walch, M., Mühl, K., Baumann, M., & Weber, M. (2017). Autonomous driving:
Investigating the feasibility of bimodal take-over requests. International Journal of
Mobile Human Computer Interaction (IJMHCI), 9(2), 58–74.
https://doi.org/10.4018/IJMHCI.2017040104
*(87) Wandtner, B., Schömig, N., & Schmidt, G. (2018). Secondary task engagement and
disengagement in the context of highly automated driving. Transportation Research
Part F: Traffic Psychology and Behaviour, 58, 253–263.
*(88) Winter, J. D., Stanton, N. A., Price, J. S., & Mistry, H. (2016). The effects of driving
with different levels of unreliable automation on self-reported workload and
secondary task performance. International Journal of Vehicle Design, 70(4), 297–
324. https://doi.org/10.1504/IJVD.2016.076736
Wogalter, M. S., Conzola, V. C., & Smith-Jackson, T. L. (2002). Research-based guidelines
for warning design and evaluation. Applied Ergonomics, 33(3), 219–230.
https://doi.org/10.1016/S0003-6870(02)00009-1
Wogalter, M. S., & Laughery, K. R. (1996). Warning! Sign and label effectiveness. Current
Directions in Psychological Science, 5(2), 33–37.
https://doi.org/10.1111/1467-8721.ep10772712
*(89) Wright, T. J., Agrawal, R., Samuel, S., Wang, Y., Zilberstein, S., & Fisher, D. L.
(2017). Effects of alert cue specificity on situation awareness in transfer of control in
Level 3 automation. Transportation Research Record, 2663, 27–33.
https://doi.org/10.3141/2663-04
*(90) Wright, T. J., Agrawal, R., Samuel, S., Wang, Y., Zilberstein, S., & Fisher, D. L.
(2018). Effective cues for accelerating young drivers’ time to transfer control
following a period of conditional automation. Accident Analysis & Prevention, 116,
14–20. https://doi.org/10.1016/j.aap.2017.10.005
*(91) Wulf, F., Rimini-Döring, M., Arnon, M., & Gauterin, F. (2015). Recommendations
supporting situation awareness in partially automated driver assistance systems.
IEEE Transactions on Intelligent Transportation Systems, 16(4), 2290–2296.
https://doi.org/10.1109/TITS.2014.2376572
*(92) Yang, Y., Karakaya, B., Dominioni, G. C., Kawabe, K., & Bengler, K. (2018). An HMI
concept to improve driver’s visual behavior and situation awareness in automated
vehicle. 2018 21st International Conference on Intelligent Transportation Systems
(ITSC), 650–655. https://doi.org/10.1109/ITSC.2018.8569986
*(94) Yoon, S. H., Kim, Y. W., & Ji, Y. G. (2019). The effects of takeover request modalities
on highly automated car control transitions. Accident Analysis & Prevention, 123,
150–158. https://doi.org/10.1016/j.aap.2018.11.018
*(95) Yun, H., & Yang, J. H. (2020). Multimodal warning design for take-over request in
conditionally automated driving. European Transport Research Review, 12(1).
https://doi.org/10.1186/s12544-020-00427-5
*(96) Zeeb, K., Buchner, A., & Schrauf, M. (2015). What determines the take-over time? An
integrated model approach of driver take-over after automated driving. Accident
Analysis & Prevention, 78, 212–221. https://doi.org/10.1016/j.aap.2015.02.023