A review of uncertainty
visualization errors: Working
memory as an explanatory theory
Lace Padilla a,∗, Spencer C. Castro b, and Helia Hosseinpour a
a Cognitive and Information Sciences Department, University of California Merced, Merced, CA, United States
b Management of Complex Systems Department, University of California Merced, Merced, CA, United States
∗ Corresponding author: e-mail address: [email protected]
Contents
1. Introduction
2. Visualization decision-making framework
2.1 Visual array and attention
2.2 Working memory
2.3 Visual description
2.4 Graph schemas
2.5 Matching process
2.6 Instantiated graph schema
2.7 Message assembly
2.8 Conceptual question
2.9 Decision-making
2.10 Behavior
3. Uncertainty visualization errors
3.1 Early-stage processing errors
3.2 Middle-stage processing errors
3.3 Late-stage errors
4. Conclusions
References
Abstract
Uncertainty communicators often use visualizations to express the unknowns in data,
statistical analyses, and forecasts. Well-designed visualizations can clearly and effectively
convey uncertainty, which is vital for ensuring transparency, accuracy, and scientific
credibility. However, poorly designed uncertainty visualizations can lead to misunder-
standings of the underlying data and result in poor decision-making. In this chapter,
we present a discussion of errors in uncertainty visualization research and current
approaches to evaluation. Researchers consistently find that uncertainty visualizations
requiring mental operations, rather than judgments guided by the visual system, lead
to more errors. To summarize this work, we propose that increased working memory
demand may account for many observed uncertainty visualization errors. In particular,
the most common uncertainty visualizations in scientific communication (e.g., variants of
confidence intervals) produce systematic errors that may be attributable to how viewers
apply working memory, or fail to apply it. To create more effective uncertainty
visualizations, we recommend that data communicators seek a sweet spot in the working
memory required by a given task and its intended viewers. We also recommend that more
work be done to evaluate the working memory demand of uncertainty visualizations, and
of visualizations more broadly.
1. Introduction
From simple analyses, such as those used in introductory statistics text-
books, to the complex forecasts of pandemic projection models, uncertainty
presents a difficult challenge for those seeking to represent and interpret it.
Uncertainties that can arise throughout a modeling and analysis pipeline
(Pang, Wittenbrink, & Lodha, 1997) are of interest to many fields. To con-
strain the complex category of uncertainty to its component parts, scholars
commonly distinguish between several types of uncertainty: ontological
(uncertainty created by the accuracy of the subjectively described reality
depicted in the model), epistemic (limited knowledge producing uncertainty),
and aleatoric (inherent irreducible randomness of a process; Spiegelhalter,
2017). Additionally, quantified forms of aleatoric and epistemic uncertainty
are referred to as risk in decision-making domains (Knight, 2012). In this chap-
ter, we define uncertainty to encompass quantifiable and visualizable uncer-
tainty, such as a probability distribution.
Many people have difficulty reasoning with even simple forms of uncer-
tainty (Gal, 2002). One study found that 16–20% of 463 college-educated
participants could not correctly answer the question, “Which represents the
larger risk: 1%, 5%, or 10%?” (Lipkus, Samsa, & Rimer, 2001). Other work
finds that even experts with training in statistics commonly misunderstand
how to interpret statistical significance from frequentist 95% confidence inter-
vals (Belia, Fidler, Williams, & Cumming, 2005). These findings—that even
simple forms of uncertainty are challenging for college graduates and statisti-
cians to understand—should concern both the scientific community and soci-
ety. We should be concerned because we all make both small- and large-scale
decisions with uncertainty throughout our lives, such as picking stocks to
invest in or evaluating our pandemic risk.
In the context of textual expressions of uncertainty, researchers propose
that people have difficulty understanding probabilities when expressed as a
percent (e.g., 10% chance of rain), because this framing is not how we expe-
rience probabilities in our daily lives (Gigerenzer & Hoffrage, 1995). A sub-
stantial body of research demonstrates that if we express uncertainty in
the form of frequency (e.g., it will rain 1 of 10 times), the representation
becomes more intuitive (e.g., Gigerenzer, 1996, 2008; Gigerenzer &
Gaissmaier, 2011; Gigerenzer & Hoffrage, 1995; Gigerenzer, Todd, &
ABC Research Group, 2000; Hoffrage & Gigerenzer, 1998). This line of
inquiry takes the perspective that humans can effectively reason with uncer-
tainty if, and only if, the information is presented in an intuitive way.
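To make the contrast concrete, the short sketch below (our own illustration, not from the chapter; the helper name as_frequency is hypothetical) converts a percentage into the natural-frequency phrasing that this line of work recommends.

```python
def as_frequency(probability: float, reference_class: int = 10) -> str:
    """Re-express a probability as a natural-frequency statement,
    e.g., 0.10 -> 'about 1 of 10 times' (the format argued to be more intuitive)."""
    events = round(probability * reference_class)
    return f"about {events} of {reference_class} times"

print(as_frequency(0.10))        # about 1 of 10 times
print(as_frequency(0.30, 100))   # about 30 of 100 times
```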
In addition to research on textual expressions of uncertainty, a large body of
evidence demonstrates that communicating uncertainty visually can help peo-
ple make more effective judgments about risk (for reviews see, Kinkeldey,
MacEachren, Riveiro, & Schiewe, 2017; Kinkeldey, MacEachren, &
Schiewe, 2014; MacEachren et al., 2005; Padilla, Kay, & Hullman, 2021).
Researchers propose that visualizations leverage the substantial processing
power of the visual system (Zacks & Franconeri, 2020), recruiting roughly half
of the brain (Van Essen, Anderson, & Felleman, 1992). Visualizations allow a
viewer’s visual system to complete some complex processing efficiently, such as
pattern recognition and data comparisons (Szafir, Haroz, Gleicher, &
Franconeri, 2016), which would be more challenging to do mathematically.
The power and efficiency of the visual system create an advantage for visual-
izations over textual expressions of uncertainty. For example, consider how
long it takes to read about the following two treatments and how challenging
it is to decide which is riskier.
Treatment A: 3 of 10 patients have side effects.
Treatment B: 6 of 45 patients have side effects.
Now consider the same comparison of treatments but visualized using
the icon array in Fig. 1.
Fig. 1 Icon arrays showing the proportion of patients with side effects in red after
receiving hypothetical treatments A or B.
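The icon arrays make the comparison almost immediate: Treatment A (3 of 10, or 30%) carries more risk than Treatment B (6 of 45, roughly 13%). As a rough illustration of how such an array could be drawn, here is a minimal matplotlib sketch under our own assumptions (the helper icon_array is hypothetical, not the code behind Fig. 1):

```python
import numpy as np
import matplotlib.pyplot as plt

def icon_array(ax, affected, total, cols=10, title=""):
    """Draw a simple icon array: one marker per patient,
    with affected patients shown in red."""
    rows = int(np.ceil(total / cols))
    for i in range(total):
        r, c = divmod(i, cols)
        color = "crimson" if i < affected else "lightgray"
        ax.scatter(c, rows - r, s=120, color=color)
    ax.set_title(f"{title}\n{affected} of {total} with side effects")
    ax.set_axis_off()

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
icon_array(axes[0], affected=3, total=10, title="Treatment A")
icon_array(axes[1], affected=6, total=45, title="Treatment B")
plt.tight_layout()
plt.show()
```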
Fig. 2 Example hurricane track forecast cone produced by National Hurricane Center
(https://www.nhc.noaa.gov/aboutcone.shtml).
it is easier to predict the temperature for tomorrow than the temperature for
2 weeks from now. However, when uncertainty in the storm’s path is rep-
resented visually with a cone-like visualization, it requires effort to under-
stand it as anything other than the size of the storm.
Within traditional uncertainty visualization research, practitioners
commonly recommend a set of best practices or general principles without
positing cognitive theories as to why a visualization might produce errors.
However, uncertainty visualization researchers are increasingly interested
in cognitive perspectives (Fernandes et al., 2018; Hullman, Kay, Kim, &
Shrestha, 2017; Kale, Kay, & Hullman, 2020; Kale, Nguyen, Kay, &
Hullman, 2018; Kim, Walls, Krafft, & Hullman, 2019). Notably, Kim
et al. (2019) propose a Bayesian cognitive modeling approach to incorporate
prior beliefs and update evaluations of uncertainty visualizations. Also, Joslyn
and Savelli (2020) detail the cognitive mechanisms associated with a specific
type of reasoning error in uncertainty visualization. Although prior approaches
have detailed the cognitive aspects of reasoning with uncertainty visualizations,
they do not offer a unified theory that describes the sources of errors across
visualization types. As a result, accurately predicting when a new type of
uncertainty visualization will fall into the category of helpful or harmful is
difficult.
The current chapter seeks to bridge this gap in knowledge by providing
a unifying theory for why errors occur when making decisions with uncer-
tainty visualizations. We begin this work by describing a cognitive framework
for how decisions are made with visualizations (Padilla, Creem-Regehr,
Hegarty, & Stefanucci, 2018), which we subsequently use as a tool to ground
empirical work on errors in uncertainty visualization. Then, we review
behavioral evidence on the use of uncertainty visualizations, with a focus on when
errors or misunderstandings occur, in order to find commonalities among
these errors.
As a preview, researchers consistently observe errors when a visualization
or task requires a viewer to perform a complex mental computation to accu-
rately interpret the visual information. We propose that a unifying cognitive
process that predicts these errors is increased working memory demand, or cognitive
effort. This chapter reviews research on working memory demand in the
context of visualizations and how working memory as a mental process
can potentially explain many of the errors observed in uncertainty-
visualization use.
[Figure: Visualization decision-making framework (Padilla et al., 2018), showing bottom-up and top-down attention operating on the visual array, a visual description matched to a graph schema to form an instantiated graph schema, message assembly into a conceptual message, the conceptual question, working memory, inference, decision-making, and behavior.]
studied capacity limitation errors in the context of how many digits or items in
sequence participants can remember (Miller, 1956). More recent work sug-
gests that we tend to group information (e.g., chunk) rather than maintain the
information separately, and that we can remember three to five
chunks of information (Doumont, 2002). Errors may occur when viewing
uncertainty visualizations if a visualization requires the viewer to maintain
too much information in working memory, essentially surpassing the limited
working memory capacity. As a simple example, imagine a visualization that
maps elements of the data to color, opacity, texture, size, shape, and position.
To interpret the visualization correctly, one must maintain in working mem-
ory how each variable relates to the data. Working memory capacity may be
overloaded if people are asked to do a complex data analysis with such a
working-memory demanding visualization. Capacity limitation errors include
failing to integrate all of the relevant information in a visualization; not being
able to perform a mental computation on a visualization; or failing to main-
tain, switch, or update task goals.
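For instance, a chart like the hypothetical one sketched below (illustrative matplotlib code of our own, not drawn from the chapter) asks the viewer to keep several separate encoding rules in mind at once: position, size, color, and shape each map to a different attribute.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical dataset in which several attributes are each mapped
# to a different visual channel (position, size, color, shape).
rng = np.random.default_rng(1)
n = 40
x, y = rng.uniform(0, 10, n), rng.uniform(0, 10, n)   # two position channels
size = rng.uniform(20, 300, n)                        # size channel
hue = rng.uniform(0, 1, n)                            # color channel
shape = rng.choice(["o", "s", "^"], n)                # shape channel

fig, ax = plt.subplots()
for marker in ["o", "s", "^"]:
    idx = shape == marker
    ax.scatter(x[idx], y[idx], s=size[idx], c=hue[idx],
               cmap="viridis", vmin=0, vmax=1, marker=marker, alpha=0.7)
ax.set_title("Each of these mappings must be held in working memory")
plt.show()
```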
The second category of errors related to working memory encompasses
viewers failing to use working memory when they should. By default, we
tend to make fast and automated decisions that use as little working memory
as possible (Type 1 processing) (Kahneman, 2011; Tversky & Kahneman,
1974). Type 1 processing is an adaptive strategy that we have developed
to minimize effort because effort is metabolically costly. Researchers esti-
mate that our brains account for 20–25% of our resting metabolism
(Leonard & Robertson, 1994). Voluntary effort alone may not account for
the mind's propensity toward Type 1 processing, but the combination of
conserving energy and reserving limited-capacity working memory supports
the preference for fast and automated decisions (Kool & Botvinick,
2014). However, some visualizations require the use of working memory
to be understood correctly (Type 2 processing). For example, when viewing
the line chart in Fig. 4 that illustrates the impact of the Stand Your Ground
law on gun deaths in Florida, a viewer might not notice that the Y-axis is
inverted. Without using working memory, the viewer would assume that
the Stand Your Ground law correlated with a drop in gun deaths in
Florida. To interpret this visualization correctly, a viewer needs to activate
working memory to recognize that the Y-axis is inverted and reimagine the
data’s appropriate relationships.
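The effect is easy to reproduce. The sketch below (entirely synthetic counts of our own, not the Reuters data behind Fig. 4) draws the same hypothetical series with a conventional and an inverted y-axis; the inverted panel appears to show a decline even though the values rise.

```python
import matplotlib.pyplot as plt

# Hypothetical yearly counts (illustrative only)
years = list(range(2000, 2014))
deaths = [720, 740, 760, 780, 800, 790, 850, 900, 950, 980, 1000, 990, 1010, 1020]

fig, (ax_normal, ax_flipped) = plt.subplots(1, 2, figsize=(9, 3.5), sharex=True)
for ax in (ax_normal, ax_flipped):
    ax.plot(years, deaths, color="firebrick")
    ax.axvline(2005, linestyle="--", color="gray")  # year the law was enacted
    ax.set_xlabel("Year")
    ax.set_ylabel("Gun deaths")

ax_normal.set_title("Conventional y-axis")
ax_flipped.invert_yaxis()   # the single change that flips the apparent trend
ax_flipped.set_title("Inverted y-axis (reads as a decline)")
plt.tight_layout()
plt.show()
```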
The third type of error related to working memory results from forgetting
relevant information because working memory decays over time. For exam-
ple, if asked to memorize the sequence 9,875,341,890, recalling the numbers
Fig. 4 Deceptive visualization of the number of firearm murders in Florida before and
after the 2005 Stand Your Ground law, drawn with the Y-axis reversed. This exam-
ple is based on a data visualization that was released to the public by Christine Chan at
Reuters (Pandey, Rall, Satterthwaite, Nov, & Bertini, 2015). Redrawn per CC-BY license from
Padilla, L., Creem-Regehr, S., Hegarty, M., & Stefanucci, J. (2018). Decision making with
visualizations: A cognitive framework across disciplines. Cognitive Research: Principles
and Implications, 3, 29.
after holding them in working memory for 5 s is easier than after 5 min. To
memorize such information and hold it in working memory for prolonged
periods, people generally chunk information, such as (987) 534–1890, and
then mentally rehearse the information. Without rehearsal, our ability to store
information begins to decay after approximately 5–10 s (Cowan, 2017). The
nature of the decay can vary due to the task, type of information, and indi-
vidual capacities (Cowan, Saults, & Nugent, 1997). Longer sequential visual-
ization tasks that require completion of longer-term goals may be error-prone
due to the degradation of working memory over time.
different stories, and each story pointed out different features within the
same visualization. When asked what other people might see as essential
features in the data, participants were more likely to report that other people
would see the information they were primed to think of as relevant (Xiong
et al., 2019).
more general (e.g., What can this visualization tell me about my health?)
or ill-defined (e.g., What am I looking at?). Many times, viewers may have
a sequence of conceptual questions about the visualization, which may
evolve.
In the Padilla, Creem-Regehr, Hegarty, and Stefanucci (2018) framework,
conceptual questions play a key role as they channel working memory. This
framework suggests that the central executive (i.e., the resource allocation
mechanism in working memory) applies working memory to answer the
conceptual question during visualization reasoning. As a result, the concep-
tual question can:
1. Drive a viewer’s top-down attention to relevant information
2. Guide which graph schemas are selected
3. Frame the conceptual message
4. Influence decisions
The viewer’s specific question can influence all of the processes in this model
except bottom-up attention. These processes can also form feedback loops
or prime a specific graph schema (e.g., Xiong et al., 2019). Based on the con-
ceptual message, a viewer may decide to update the question or goal and
repeat some of the processes. Errors can occur as a result of the conceptual
question if it is unclear to viewers how to achieve their particular
goals. Viewers might ask the wrong question for their goals or
use incorrect steps. Viewers might also have too many goals, which can
be challenging to keep track of and require a significant amount of working
memory to manage.
2.9 Decision-making
Once all the relevant conceptual questions have been answered and the viewer
feels comfortable making a decision, he or she completes the decision step.
The majority of widely documented decision-making biases and heuristics
occur at this step. This process involves taking the visual
information stored in the mind and using Type 1 or Type 2 processing to reach
a conclusion, usually in order to perform an action (Kahneman, 2011). Type 1
processing is relatively fast, unconscious, and intuitive. Type 2 processing
involves working memory and is slower, more metabolically intensive, and
more contemplative than Type 1 processing (Evans & Stanovich, 2013).
Other models of decision-making characterize these processes differently.
Here we note two processes in line with Evans and Stanovich (2013), one that
requires the activation of working memory to make a decision and another
process that does not require significant working memory. A massive body
of literature details the decision-making biases that can occur at this stage.
Many of these biases may influence reasoning with visualizations, but few
have been tested in that context; more work is needed to examine whether
previously documented decision-making biases generalize to decisions made
with visualizations.
2.10 Behavior
The final stage of the Padilla et al. (2018) model results in action or behavior.
Errors, although not decision-making errors, might occur in this model’s
final stage when people cannot take the action that they have selected.
For example, in hurricane forecasting, people might see a hurricane visual-
ization, decide to evacuate, and then lack the necessary resources to evacuate
or not know the appropriate evacuation route. These phenomena require
exploration in the more applied social sciences and are beyond the scope
of this chapter. However, failures to suppress heavily automated behaviors
(e.g., in the case of addictions) due to reduced cognitive resources or poor
executive control can also be observed during this stage.
Fig. 6 Visualizations that show the uncertainty in two locations using a gradient (left) or a
bounded circle (right), as used in McKenzie et al. (2016). Reproduced per CC-BY license, from
Padilla, L., Creem-Regehr, S., Hegarty, M., & Stefanucci, J. (2018). Decision making with
visualizations: A cognitive framework across disciplines. Cognitive Research: Principles
and Implications, 3, 29.
to the cone of uncertainty that did not elicit the containment strategy.
Researchers determined that the edge of the hard boundary elicits the highest
visual salience and likely drives the containment strategy (Padilla et al., 2017).
The misunderstandings associated with delineations occur in one-
dimensional data as well. Delineation errors can be understood as boundaries
creating conceptual categories (Padilla et al., 2021). The boundaries creating con-
ceptual categories error likely contributes to the numerous studies finding that
people misunderstand how to interpret error bars and confidence intervals.
Both well-trained experts in statistics and novices commonly misunderstand
how to interpret statistical significance from frequentist 95% confidence
intervals (e.g., Belia et al., 2005; Hofman, Goldstein, & Hullman, 2020).
Researchers find that even trained experts incorrectly assume that no signif-
icant difference exists between two groups with overlapping intervals (Belia
et al., 2005). When comparing two health treatments with visualized means
and frequentist 95% confidence intervals, participants were more willing to
overpay for treatment and to overestimate the effect size compared to when
the same data were shown with predictive intervals (Hofman et al., 2020).
People tend to believe that error bars contain the distribution of values,
resulting in a mismatch between the visual description and the instantiated
graph schema. If two error bars are far apart, the boundaries lead people to
believe that they contain all of the relevant values, and viewers therefore
may incorrectly assume a statistically significant difference. A similar effect
has also been found with bar charts. Researchers have demonstrated a
“within the bar bias,” where people believe that data points that fall within
a bar are more likely to be part of a distribution than data points that are
equidistant from the mean but fall outside of the bar (Newman & Scholl, 2012).
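One way to see why treating a confidence interval as if it contained the distribution of individual values inflates perceived effects is to compute both kinds of interval for the same data. The sketch below (illustrative values of our own, not Hofman et al.'s materials) contrasts a frequentist 95% confidence interval for the mean with a 95% prediction interval for a new observation; the prediction interval is substantially wider.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of a treatment outcome (illustrative values only)
rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=4.0, size=30)

n = sample.size
mean = sample.mean()
sd = sample.std(ddof=1)
se = sd / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)

# 95% confidence interval: uncertainty about the *mean*
ci = (mean - t_crit * se, mean + t_crit * se)

# 95% prediction interval: plausible range for a *new individual observation*
pi_half = t_crit * sd * np.sqrt(1 + 1 / n)
pi = (mean - pi_half, mean + pi_half)

print(f"95% CI for the mean:     [{ci[0]:.2f}, {ci[1]:.2f}]")
print(f"95% prediction interval: [{pi[0]:.2f}, {pi[1]:.2f}]")
```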
This boundaries-create-conceptual-categories error likely occurs early in the
decision-making process. As demonstrated in Padilla et al. (2017), bound-
aries make up some of the most salient features in a visualization and can
attract our bottom-up attention. As a result, we might spend more time
looking at the boundaries in a visualization, which can produce an over-
weighting of the boundaries in our conceptualization of the data.
One of the reasons boundaries create conceptual categories is that they
may reinforce Gestalt grouping principles, which are the visual system’s pro-
pensity to group and categorize visual information based on similarities in
properties such as shape, color, physical proximity, and other contextual
information (Wertheimer, 1938). As an illustration, try to determine if pat-
terns are depicted in Fig. 8. All the figure items may seem to be a part of one
global grouping because they are all circular and loosely arranged in a circle.
Fig. 9 Ambiguous Gestalt grouping example with a border around the ovular items.
With effort, most viewers notice that some objects are larger or smaller and
others circular or ovular. Identifying patterns becomes easier when bound-
aries are added, as in Fig. 9, which bounds the ovular items with a line.
When the boundaries are included, visually grouping the ovular objects
and noticing they have an upward trend is much easier. The boundary works
to precategorize some of the information for the visual system. Said another
way, the boundaries offload cognition onto the visualization by categorizing
the objects before the visual system does. The categorization created by the
boundaries occurs early in the decision-making process and reduces a visual
system processing step. However, a problem arises when a viewer needs to
group different information than what the boundary contains. When viewing
Fig. 9, try to mentally group the smaller objects. Most people can successfully
group the smaller objects and see their trend, but this process requires
[Fig. 10: London Underground maps; the right panel shows a diagrammatic layout based on Harry Beck's design. Station labels include Paddington, Baker St., Euston, Liverpool Street, Victoria, and Waterloo.]
The Padilla et al. (2018) schema instantiation process has three steps, as
illustrated in Fig. 11. A viewer must first correctly classify the visualization
type. With standard visualizations (e.g., line or bar charts), accurate classifi-
cation occurs relatively easily. However, errors can arise when ambiguity
exists in classifying the category or type for a visualization.
A famous example of classification involves the London Underground map
by Harry Beck (see an example based on Beck’s innovation in Fig. 10, right).
Beck helped define a new cartographic convention that departed from the his-
torical approach of superimposing subway lines on a geographically accurate
map (Guo, 2011). In Beck’s redesign, he opted to arrange the layout in a dia-
grammatic fashion that focused on improving the legibility of routes, transfers,
and stops, inspired by electrical circuits. Initially, transit officials scoffed at the
design, but it was ultimately adopted in 1933. Some of the apprehension about
Beck's map arose because officials thought that riders might see it as a standard
map, fail to realize that the distances between stops were not based on physical
distance, become confused, and miss their stops. Researchers continue to
discuss whether Beck’s design should be classified as a map or as a diagram
(Cartwright, 2012).
When innovations change visualization design, viewers might
become confused about how to classify a new type of visualization, which
can affect how they determine and implement an appropriate schema.
Today, Beck’s approach has been utilized worldwide for close to a century,
and most transit riders have developed a specific schema for diagrammatic
subway maps. Beck’s success is likely due in part to the design being different
enough from standard approaches that the design prompted riders to recog-
nize that a standard map-based schema would not work. Additionally, the
design reduced directional information to three axes, reducing the memory
required to match viewers’ destination goals with their visual description.
In the next step of the graph instantiation process, viewers retrieve the
relevant schema based on how they classified the visualization. Errors can
occur in this process when viewers have not learned an appropriate schema.
When no schema is available for a graph type, viewers might utilize a
schema from a different visualization type or context. For example, see
the new coordinate system in Fig. 12 and try to determine the values for B.
[Figure: point A at (2,3) and point B at unknown coordinates (?,?) on the hypothetical coordinate plane.]
Fig. 13 Example of the mental rotation needed to apply the schema for a Cartesian
coordinate plane to a hypothetical new coordinate plane and derive B's values.
One strategy is to notice that A and B both have two values and a coor-
dinate plane. Dot plots use similar Cartesian coordinate planes but have dif-
ferent axes than in the example. One could apply the Cartesian coordinate
schema to interpret the new hypothetical coordinate plane and then derive
B’s values, as illustrated in Fig. 13.
The problem with applying the schema for a Cartesian coordinate plane to
the new coordinate plane is that the planes do not adhere to the same graphic
conventions. The angles between the axes in Fig. 14 are not 90°. It is easy to
incorrectly apply a Cartesian coordinate schema to the new coordinate plane,
as the two share similar properties. When the appropriate schema is unknown,
viewers commonly retrieve a different visualization schema to interpret the
new information, which can work out well in some cases or can lead them
to systematic misinterpretations. Graph schemas that viewers can easily
remember and those frequently used are more likely to be applied to an
ambiguous visualization type.
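A small numerical sketch (our own construction, assuming hypothetical axes 60° apart) shows what goes wrong: reading a point's values by the habitual perpendicular projection of the Cartesian schema yields different numbers than the point's true coordinates in the oblique system, which require solving a change-of-basis problem.

```python
import numpy as np

# Hypothetical oblique axes: 60 degrees apart instead of 90
u = np.array([1.0, 0.0])                      # first axis direction
theta = np.radians(60)
v = np.array([np.cos(theta), np.sin(theta)])  # second axis direction

# A point expressed in the oblique system as (-2, 3)
oblique_coords = np.array([-2.0, 3.0])
point = oblique_coords[0] * u + oblique_coords[1] * v   # physical location

# Naive "Cartesian schema" reading: project perpendicularly onto each axis
naive_reading = np.array([point @ u, point @ v])

# Correct reading: solve the change-of-basis system
basis = np.column_stack([u, v])
recovered = np.linalg.solve(basis, point)

print("Naive Cartesian-style reading:", naive_reading.round(2))   # wrong values
print("Coordinates in the oblique system:", recovered.round(2))   # [-2.  3.]
```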
In the final stage of the schema instantiation process, viewers must apply
the schema that they have retrieved to the visualization in order to answer
Fig. 14 Illustration of how the axes of the new coordinate plane are not at 90° angles; point A is shown at (2,3) and point B at (-2,3).
the conceptual question. When a mismatch between the schema and the
visualization occurs, as illustrated in the prior example, a transformation is
required to make the two align. Cognitive Fit Theory describes how errors
occur when a mismatch between the schema and the visualization requires
exorbitant mental computations (Vessey, 1991). A large mismatch between
the schema and visualization requires significant working memory to make
the two align, which results in increased errors and time to complete the task
(Padilla et al., 2018). Note that the Padilla et al. (2018) model suggests that
the schema matching process and all other processes (other than bottom-up
attention) are in service of the conceptual question. Even if viewers do not
think they are trying to answer a specific question, they always have a goal,
which could be as simple as understanding what they see.
researchers have found that gradient plots of 1D data can outperform interval
plots of the same information (Correll & Gleicher, 2014). Gradient plots of
1D data require only a single schema for mapping opacity to probability.
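A gradient plot of this kind can be sketched in a few lines (a minimal example under our own assumptions, not Correll and Gleicher's implementation): opacity simply follows the probability density around the estimate.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# Hypothetical 1D estimate: mean 5, standard error 1 (illustrative values)
mean, se = 5.0, 1.0
xs = np.linspace(mean - 4 * se, mean + 4 * se, 200)
density = norm.pdf(xs, mean, se)
alphas = density / density.max()       # opacity encodes probability density

fig, ax = plt.subplots(figsize=(6, 1.5))
for x0, x1, a in zip(xs[:-1], xs[1:], alphas):
    ax.axvspan(x0, x1, color="steelblue", alpha=float(a))
ax.plot(mean, 0.5, "k|", markersize=20)   # point estimate
ax.set_yticks([])
ax.set_xlabel("Estimate")
ax.set_title("Gradient plot: one mapping, opacity = probability density")
plt.show()
```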
Researchers have also documented how ensemble visualizations, which
are the most effective hurricane forecast visualization technique (Ruginski
et al., 2016), can also suffer from schema errors (Padilla et al., 2017;
Padilla, Creem-Regehr, et al., 2020). The approach of this work was to
identify the schema participants use when viewing ensemble visualizations.
Ensemble visualizations have been developed as a technique relatively
recently (Liu et al., 2016), and we can reasonably assume that people have
not developed a specific schema for ensembles.
After reviewing all commonly available visualization techniques,
researchers noted that the ensemble visualization shared many similar prop-
erties to map-based navigation applications (Padilla et al., 2017). Both
map-based travel applications and ensemble hurricane forecasts have a base
map that adheres to standard cartographic principles and overlays of lines.
Researchers speculated that when viewing an ensemble visualization, people
utilize the schema that they have developed for understanding travel appli-
cations (Padilla et al., 2017). An essential benefit to using a travel application
schema is that participants would not have to hold multiple schemas in their
minds (e.g., one for maps and one for uncertainty). The use of a single
schema could be one reason why ensemble visualizations outperform
cone-like hurricane forecasts (Padilla et al., 2017; Ruginski et al., 2016).
However, the problem with using a travel application schema for hur-
ricane ensembles is that the schema could lead to errors in specific cases.
Researchers tested an additional hypothesis that people see each line of
the hurricane forecast ensemble as a specific path the hurricane could take
(Padilla et al., 2017). The schema for geospatial travel visualizations dictates
that the application shows a finite list of possible discrete routes and not a
distribution of routes. In contrast, each line in the ensemble visualization
depicts one sample from a distribution. In other words, the ensemble lines show
the spread of uncertainty in the path of the storm. They do not show an
exhaustive list of every possible path the storm could take. If people use a
schema for geospatial travel applications and one of the ensemble members
intersects a location of interest, they may incorrectly think the likelihood is
higher that the storm will hit that location (Padilla et al., 2017).a
a Note that researchers provided participants with little information about how to interpret the ensembles,
which simulates the conditions in which they would see hurricane forecasts in the news (i.e., on average,
hurricane forecasts are shown on TV for 1.52 min; Padilla, Creem-Regehr, et al., 2020; Padilla,
Powell, Kay, & Hullman, 2020).
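The distinction is worth making concrete. The sketch below (entirely synthetic tracks of our own, not National Hurricane Center output) draws a small ensemble in which every line is one draw from the same path distribution; no individual line is a route the storm is expected to follow, and the set is not exhaustive.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical ensemble of storm tracks: each member is one draw
# from a distribution over possible paths.
rng = np.random.default_rng(42)
n_members, n_steps = 20, 50
x = np.linspace(0, 10, n_steps)

tracks = []
for _ in range(n_members):
    drift = rng.normal(0.3, 0.15)                    # member-specific heading
    noise = np.cumsum(rng.normal(0, 0.08, n_steps))  # along-track wobble
    tracks.append(drift * x + noise)

for y in tracks:
    plt.plot(x, y, color="steelblue", alpha=0.5, linewidth=1)
plt.title("Ensemble members: samples from a path distribution,\nnot an exhaustive list of routes")
plt.xlabel("Distance east")
plt.ylabel("Distance north")
plt.show()
```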
Fig. 15 Example ensemble hurricane forecast visualizations with two locations from
Padilla et al. (2017). In each visualization, one location is intercepted by an
ensemble member. Reproduced per CC-BY license, from Padilla, L., Creem-Regehr, S.,
Hegarty, M., & Stefanucci, J. (2018). Decision making with visualizations: A cognitive frame-
work across disciplines. Cognitive Research: Principles and Implications, 3, 29.
tested if participants could override the graph schema using working mem-
ory to control their overreaction cognitively. Researchers provided partic-
ipants with extensive instructions on interpreting an ensemble visualization
and how to perform the task correctly. Participants with extensive instruc-
tions were able to reduce their bias but not entirely remove it. At the end of
the study, participants who received extensive instructions could report the
correct strategy, but these participants still overreacted in their behavioral
judgments, albeit to a lesser degree (Padilla, Creem-Regehr, et al., 2020).
In summary, ongoing research on hurricane forecast visualizations dem-
onstrates multiple schema-related errors. Errors are highly likely when a
visualization increases working memory demand, for example by requiring viewers
to maintain two schemas or to cognitively override one schema. The major-
ity of geospatial uncertainty visualizations will likely encounter similar errors
because superimposing the uncertainty visualization on the base map will
likely evoke the viewer’s map schema.
Future visualization designers interested in communicating geospatial
uncertainty in a way that does not evoke a traditional cartographic schema could
utilize the approach pioneered by Harry Beck in the London Underground
map. One possible reason that the London Underground map does not pro-
duce large schema-based errors is that its differences sufficiently separate the
visualization from a traditional map, which makes people aware that a con-
ventional map schema is not appropriate. If the visualization alerts the viewer
to its novelty, it could trigger the viewer to develop a new schema.