A Structured Approach to Strategic Decisions - Kahneman et al. - 2019
Envision the following situations: A board of directors considers acquiring a competitor. A marketing team
decides whether to launch a new product. A venture capital investment committee chooses among an array
of startups to fund.
All those strategic decisions share a common feature: They are evaluative judgments. To make such tough
calls, people must boil down a large amount of complex information to either
(1) numerical scores for competing options or (2) a yes-no decision on whether to choose a specific path. Of
course, some management decisions are made without weighing quite so much information. But strategic
decisions tend to involve the distillation of complexity into a single path forward.
Given how unreliable human judgment is, all evaluations are susceptible to errors. These errors can stem from
known cognitive biases — or they can be random errors, sometimes called “noise.” Unreliability in judgment
has long been recognized and studied, particularly in the context of decision-making about hiring. We draw
inspiration from that body of research and experience to suggest a practical, broadly applicable approach to
reducing errors in strategic decision-making. We call it the Mediating Assessments Protocol (MAP), and we’ll
describe it here, after discussing the underpinning research.
In hiring, evaluations have traditionally rested on unstructured interviews. Unfortunately, a vast amount of
evidence indicates that unstructured interviews lead to biased evaluations
that have very little predictive value.2 That’s because the interviewer forms a mental model (colloquially
known as an “impression”) of a candidate, a process that psychologists have shown has three specific
limitations:
1. Excessive coherence. Mental models are usually simpler and more coherent than the reality they aim
to assess. As interviewers, if we assume, for instance, that a particular candidate is an extrovert, we tend to
ask questions that confirm this hypothesis.
2. A “quick and sticky” quality. We form our mental models rapidly, often on the basis of limited
evidence at the start of the process, and we alter our models slowly as new facts emerge. That explains why,
as common sense would suggest (and research has confirmed3), first impressions have a disproportionate
effect on the assessments we make of people in general and on the outcome of job interviews.4
3. Biased weighting. Our mental models often don’t give each pertinent fact the weight it deserves. We
may discount important bits of information or, in contrast, give great weight to factors that should be entirely
irrelevant. For example, an interviewer may wrongly perceive that a male candidate has great leadership
qualities just because he is tall and has a deep voice.
Such challenges in hiring are easy to recognize. For that reason, we do not expect all interviewers to agree on
one candidate — and we often compensate by averaging several interviewers’ viewpoints.
What many people do not fully appreciate is that we use the same process of mental model formation in
strategic decision-making, with similar
limitations and results.5
Suppose, for instance, that you’re selecting a location for a new plant. You must assess multiple factors: labor
costs, technical feasibility, regulatory requirements, political stability in various regions, and so on. You
already have a mental image of the candidate countries and cities. As you learn new facts about each
prospective site, the bias toward excessive coherence leads you to confirm that image, which is likely to be
much less nuanced and less ambiguous than the reality.
This bias may be even stronger if a team is collaborating on the decision:6 As a favorite plant site emerges,
its perceived benefits are exaggerated and its perceived costs underestimated. Then, when the team
reviews the merits of possible locations, early impressions formed in the first minutes of the discussion are
likely to weigh heavily on the final decision. Once an impression is formed, you will tend to ask leading
questions that support your early views. Because of confirmation bias,7 you will interpret ambiguous facts in
light of preexisting attitudes. It takes a considerable amount of evidence to reverse an erroneous initial
judgment.
Finally, just as a candidate’s physical appearance may influence a hiring decision, certain attributes of a
manufacturing site may carry undue weight. Predictably, extra weight is often given to recent and salient
information (a case of “availability bias”).8 For example, you may overreact to recent news of political turmoil
or overemphasize short-term considerations. And given that most of us tend to be optimistic and
overconfident in our forecasts, you may underestimate technical challenges in constructing the new plant.
In short, unstructured decision-making, whether in job interviews or in other, more strategic decisions, is
vulnerable to both bias and “noise.” The presence of noise explains why researchers find large variations
whenever they systematically investigate the reliability of judgment.9
Core Elements of Structured Decisions
MAP is a structured approach that, like a structured interview, grounds strategic decisions on mediating
assessments. MAP has three core elements (a brief sketch of the resulting workflow follows the list):
• Define the assessments in advance. The decision maker must identify a handful of mediating
assessments, key attributes that are critical to the evaluation. In the decision to acquire a company, for
example, the assessments could include anticipated revenue synergies or qualifications of the management
team. This process is similar to one a hiring committee would follow when creating a job description that
outlines attributes required for success in the position.
• Use fact-based, independently made assessments. People who weigh in on one aspect of a strategic
option should not be influenced by one another — or by other dimensions of the option. Their opinions should
be grounded in the evidence available. This approach is comparable to a well-organized structured-interview
process, in which job seekers are scored on each key attribute solely on the basis of their answers to relevant
questions, calibrated using predefined scales.
• Make the final evaluation when the mediating assessments are complete. Unless a deal-breaker fact
is uncovered (for instance, evidence of accounting fraud at the acquisition target), the final decision should
be discussed only when all key attributes have been scored and a complete profile of assessments is available.
This is similar to having a hiring committee review all the evaluations made by each interviewer on each key
requirement of the job description before making a decision on a candidate.
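To make these three elements concrete, the following minimal Python sketch shows one possible way a team could record mediating assessments and hold back the final discussion until they are complete. The option name, the assessment labels, and the 0-100 scoring scale are illustrative assumptions rather than part of MAP itself.

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical assessment labels, defined in advance (element 1).
ASSESSMENTS = ["revenue_synergies", "management_team", "integration_risk"]

@dataclass
class StrategicOption:
    name: str
    # Each assessment collects independently made, fact-based scores (element 2),
    # assumed here to be on a 0-100 scale.
    scores: dict = field(default_factory=lambda: {a: [] for a in ASSESSMENTS})

    def record(self, assessment: str, score: float) -> None:
        # Evaluators score one dimension at a time, without seeing one another's
        # ratings or the scores given on other dimensions.
        self.scores[assessment].append(score)

    def profile(self) -> dict:
        # The complete profile is released only once every assessment has been
        # scored (element 3): no final discussion before that point.
        if any(not s for s in self.scores.values()):
            raise ValueError("Complete all mediating assessments before the final discussion.")
        return {a: mean(s) for a, s in self.scores.items()}

# Usage: score each dimension independently, then review the full profile.
target = StrategicOption("Acquisition target A")
target.record("revenue_synergies", 72)
target.record("management_team", 85)
target.record("integration_risk", 40)
print(target.profile())  # {'revenue_synergies': 72, 'management_team': 85, 'integration_risk': 40}
```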
The use of mediating assessments reduces variability in decision-making because it seeks to address the
limitations of mental model formation, even though it cannot eliminate them entirely. By delineating the
assessments clearly and in a fact-based, independent manner, and delaying final judgment until all
assessments are finished, MAP tempers the effects of bias and increases the transparency of the process, as
all the assessments are presented at one time to all decision makers. For example, because each attribute is
scored on its own evidence, salient or recent pieces of information do not receive undue weight, so the
process preempts the availability bias. MAP also
reduces the risk that a solution will be judged by its similarity with known categories or stereotypes (an error
arising from the representativeness bias13). When differentiated, independent facts are clearly laid out, logical
errors are less likely.
Some decision makers will have an initial dislike for MAP, just as many recruiters still resist structured
interviewing. The requirements may appear mechanical, and the limits it places on the role of intuition will
not appeal to leaders who have been rewarded for “trusting their gut.” Structured decision-making, based
on mediating assessments, will be adopted only if it is viewed as offering a substantial improvement in
decision-making quality.
Accordingly, we next examine MAP’s application and benefits in two types of strategy decisions: large one-
off decisions made by teams of executives or directors, and recurrent decisions made as part of formalized
processes that, in aggregate, shape a company’s strategy.
VC Co.’s early experience using MAP did not slow down the firm’s decision-making. On the contrary, members
of the firm have told us that the orderly flow of work in the new protocol actually saves time, and they have
pointed to several investments where they believe it has led them to a distinctly better decision. Quantitative
effects will take years to assess.
How should decision makers express their assessments? Standard rating scales with verbal anchors (“very
good,” “good,” and so on) have the advantage of simplicity. However, the ambiguity of verbal labels is a major
source of noise, because different people use different words to convey the same underlying judgment.
Percentile scales offer a possible solution. Many leading companies already use them to express judgments
about the performance of employees on multiple dimensions. For example: “Jenny is in the top 10% of the
population of junior executives for
raw intellect, but in the third quartile for interpersonal skills.” We recommend extending the use of percentile
scales to assessments in other domains. When evaluating the quality of a target company’s go-to-market
capabilities, for instance, a venture capital firm can ask: “How does this company compare with other
companies in the same sector? Is it in the top 10%? In the top 25%?” The percentile scale requires a specified
frame of reference, understood and shared by all. In this example, “all the potential targets we have
evaluated” would be the most appropriate reference class.
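In practice, a percentile judgment is simply a rank within the agreed reference class. The short Python sketch below shows one way such a rank might be computed; the scores for previously evaluated targets are made-up numbers used purely for illustration.

```python
from bisect import bisect_left

def percentile_rank(value: float, reference_class: list[float]) -> float:
    """Percentage of the reference class that falls below `value`."""
    ordered = sorted(reference_class)
    return 100.0 * bisect_left(ordered, value) / len(ordered)

# Hypothetical go-to-market scores for "all the potential targets we have evaluated."
past_targets = [42, 55, 61, 64, 70, 73, 78, 81, 88, 93]

# A new target scoring 85 sits above 80% of the reference class, i.e., in the top 20%.
print(percentile_rank(85, past_targets))  # 80.0
```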
Percentile scales have several advantages over other types of rating scales. First, they require the evaluator
to bring comparable cases to mind and to think of the case at hand as one particular instance of a broader
category. This approach, which has been called the outside view, is a powerful debiasing technique by itself.i
Second, percentile scales allow individual biases to be detected and corrected. For example, an overly lenient
or optimistic person who rates 40% of cases as belonging to the top 10% will eventually be identified and
trained to use the scale appropriately. When people
have learned to use a percentile scale well, they are said to be calibrated. Improving calibration is another
major step in reducing noise in judgments.
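A check along these lines can be quite simple: compare the share of "top 10%" ratings each evaluator gives against the 10% the scale implies, and flag anyone far above it. The thresholds in the sketch below are assumptions chosen for illustration.

```python
def leniency_flag(ratings: list[float], top_cutoff: float = 90.0,
                  expected_share: float = 0.10, tolerance: float = 2.0) -> bool:
    """Flag a rater whose share of top-decile ratings far exceeds the expected 10%."""
    share = sum(r >= top_cutoff for r in ratings) / len(ratings)
    return share > tolerance * expected_share

# A rater who places 40% of cases in the top 10% of the scale is flagged for recalibration.
lenient_rater = [95, 92, 97, 91, 60, 55, 70, 45, 80, 30]  # 4 of 10 scores at or above 90
print(leniency_flag(lenient_rater))  # True
```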
Third, percentile ratings can easily be translated into policy. If underwriters, for instance, rate risks in
percentiles, premiums can be priced on the basis of their ratings, and the company can decide at what
percentile in the distribution of risks it sets the limit of what it is willing to underwrite.
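Assuming risk ratings are expressed as percentiles and the firm has chosen its cutoff, the translation into a decision rule can be as direct as the hypothetical sketch below; the cutoff, base premium, and pricing formula are illustrative, not an actual underwriting policy.

```python
from typing import Optional

def underwriting_decision(risk_percentile: float,
                          accept_below: float = 75.0,
                          base_premium: float = 1000.0) -> tuple[bool, Optional[float]]:
    """Accept only risks below the chosen percentile; scale the premium with the rating."""
    if risk_percentile >= accept_below:
        return False, None          # beyond the percentile the firm is willing to underwrite
    premium = base_premium * (1 + risk_percentile / 100)
    return True, premium

print(underwriting_decision(60.0))  # (True, 1600.0)
print(underwriting_decision(90.0))  # (False, None)
```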
Percentile scales can be challenging to define, introduce, and administer. But the accuracy gains they bring
are worth the effort.
Whatever else it produces, any organization is a decision factory. Some of its decisions are made by people
following clear rules. But many of the decisions that shape the future of organizations require time-consuming
deliberation, analysis, and the balancing of multiple considerations. Such decisions cannot easily be “quality
checked.” To improve them, we must work on the processes by which they are made.
MAP is one way of doing so. By adding discipline to decision-making and limiting some well-known flaws, it
brings quality assurance to complex decisions. While other decision support approaches, such as decision
theory or advanced analytical models, share the same objective, MAP has some advantages. It is easy to learn,
involves a minimal amount of additional work, and leaves senior executives some freedom to exercise
intuitive judgment, albeit after a useful delay. As such, it should be a valuable tool for any leader who aims to
raise the quality of an organization’s decisions.
REFERENCES
1. J. Levashina, C.J. Hartwell, F.P. Morgeson, and M.A. Campion, “The Structured Employment
Interview: Narrative and Quantitative Review of the Research Literature,” Personnel Psychology 67, no. 1
(February 2014): 241-293.
2. J. Dana, R. Dawes, and N. Peterson, “Belief in the
Unstructured Interview: The Persistence of an Illusion,” Judgment and Decision Making 8, no. 5 (September
2013): 512-520; and D.A. Moore, “How to Improve the Accuracy and Reduce the Cost of Personnel
Selection,” California Management Review 60, no. 1 (November 2017): 8-17.
3. M.R. Barrick, S.L. Dustin, T.L. Giluk, G.L. Stewart, et al., “Candidate Characteristics Driving Initial
Impressions
During Rapport Building: Implications for Employment Interview Validity,” Journal of Occupational and
Organizational Psychology 85, no. 2 (June 2012): 330-352.
4. C.Y. Olivola and A. Todorov, “Fooled by First Impressions? Reexamining the Diagnostic Value of
Appearance- Based Inferences,” Journal of Experimental Social Psychology 46, no. 2 (March 2010): 315-324.
5. D. Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011).
6. C.R. Sunstein and R. Hastie, Wiser: Getting Beyond Groupthink to Make Groups Smarter (Boston:
Harvard Business Review Press, 2015).
7. R.S. Nickerson, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of General
Psychology 2, no. 2 (June 1998): 175-220.
8. A. Tversky and D. Kahneman, “Judgment Under Uncertainty: Heuristics and Biases,” Science 185,
no. 4157 (Sept. 27, 1974): 1,124-1,131.
9. D. Kahneman, A.M. Rosenfield, L. Gandhi, and T. Blaser, “Noise: How to Overcome the High, Hidden
Cost of Inconsistent Decision-Making,” Harvard Business Review 94, no. 10 (October 2016): 36-43.
10. Levashina et al., “The Structured Employment Interview.”
11. Kahneman, Thinking, Fast and Slow.
12. L. Bock, Work Rules! Insights From Inside Google That Will Transform How You Live and Lead
(London: Hachette U.K., 2015).
13. Tversky and Kahneman, “Judgment Under Uncertainty.”
14. Kahneman, Thinking, Fast and Slow.
15. O. Sibony, D. Lovallo, and T.C. Powell, “Behavioral Strategy and the Strategic Decision Architecture of
the Firm,” California Management Review 59, no. 3 (May 2017): 5-21.
16. Kahneman et al., “Noise.”
i. D. Lovallo and D. Kahneman, “Delusions of Success: How Optimism Undermines Executives’ Decisions,”
Harvard Business Review 81, no. 7 (July 2003): 56-63.