DLT-Superminds

The article discusses the importance of designing human-machine collaboration, termed 'superminds', to enhance the future of work rather than viewing AI as a substitute for human labor. It emphasizes that AI should augment human capabilities and that effective collaboration can lead to greater value creation and more meaningful work for employees. The authors argue for a shift in perspective towards a more integrated approach that leverages the strengths of both humans and machines.


COMPLIMENTARY ARTICLE REPRINT

ISSUE 27, JULY 2020

Superminds, not substitutes:


Designing human-machine collaboration
for a better future of work
JAMES GUSZCZA AND JEFF SCHWARTZ WITH A NOTE BY JOE UCUZOGLU, CEO, DELOITTE US
ILLUSTRATION BY DAVID VOGIN

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee, and its network of
member firms, each of which is a legally separate and independent entity. Please see http://www.deloitte.com/about for a detailed
description of the legal structure of Deloitte Touche Tohmatsu Limited and its member firms. Please see http://www.deloitte.com/
us/about for a detailed description of the legal structure of the US member firms of Deloitte Touche Tohmatsu Limited and their
respective subsidiaries. Certain services may not be available to attest clients under the rules and regulations of public accounting.
For information on the Deloitte US Firms’ privacy practices, see the US Privacy Notice on Deloitte.com.

Copyright © 2020. Deloitte Development LLC. All rights reserved.


FUTURE OF WORK

Superminds,
not substitutes
DESIGNING HUMAN-MACHINE
COLLABORATION FOR A
BETTER FUTURE OF WORK

JAMES GUSZCZA AND JEFF SCHWARTZ


WITH A NOTE BY JOE UCUZOGLU,
CEO, DELOITTE US
ART BY DAVID VOGIN

www.deloitte.com/deloitte-review

“AI systems will need to be smart and to be good teammates.”
– Barbara Grosz1

ARTIFICIAL INTELLIGENCE (AI) is one of the signature issues of our time, but also one of the most easily misinterpreted. The prominent computer scientist Andrew Ng’s slogan “AI is the new electricity”2 signals that AI is likely to be an economic blockbuster—a general-purpose technology3 with the potential to reshape business and societal landscapes alike. Ng states:

Just as electricity transformed almost everything 100 years ago, today I actually have a hard time thinking of an industry that I don’t think AI will transform in the next several years.4

Such provocative statements naturally prompt the question: How will AI technologies change the role of humans in the workplaces of the future?

An implicit assumption shaping many discussions of this topic might be called the “substitution” view: namely, that AI and other technologies will perform a continually expanding set of tasks better and more cheaply than humans, while humans will remain employed to perform those tasks at which machines cannot (yet) excel. This view comports with the economic goal of achieving scalable efficiency.

The seductiveness of this received wisdom was put into sharp relief by this account of a prevailing attitude at the 2019 World Economic Forum in Davos:

People are looking to achieve very big numbers … Earlier, they had incremental, 5 to 10 percent goals in reducing their workforce. Now they’re saying, “Why can’t we do it with 1 percent of the people we have?”5

But as the personal computing pioneer Alan Kay famously remarked, “A change in perspective is worth 80 IQ points.” This is especially true of discussions of the roles of humans and machines in the future of work. Making the most of human and machine capabilities will require moving beyond received wisdom about both the nature of work and the capabilities of real-world AI.

The zero-sum conception of jobs as fixed bundles of tasks, many of which will increasingly be performed by machines, limits one’s ability to reimagine jobs in ways that create new forms of value and meaning.6 And framing AI as a kind of technology that imitates human cognition makes it easy to be misled by exaggerated claims about the ability of machines to replace humans.

We believe that a change in perspective about AI’s role in work is long overdue. Human and machine capabilities are most productively harnessed by designing systems in which humans and machines function collaboratively in ways that complement each other’s strengths and counterbalance each other’s limitations. Following MIT’s Thomas Malone, a pioneer in the study of collective intelligence, we call such hybrid human-machine systems superminds.7
The change in perspective from AI as a human substitute to an enabler of human-machine superminds has fundamental implications for how organizations should best harness AI technologies:

• Rather than focusing primarily on the ability of computer technologies to automate tasks, we would do well to explore their abilities to augment human capabilities.

• Rather than adopt a purely technological view of deploying technologies, we can cultivate a broader view of designing systems of human-computer collaboration. Malone calls this approach “supermind-centered design.”8

• Rather than approach AI only as a technology for reducing costs, we can also consider its potential for achieving the mutually reinforcing goals of creating more value for customers and work that provides more meaning for employees.

Compared with the economic logic of scalable growth, the superminds view may strike some as Pollyannaish wishful thinking. Yet it is anything but. Two complementary points—one scientific, one societal—are worth keeping in mind.

First, the superminds view is based on a contemporary, rather than decades-old, scientific understanding of the comparative strengths and limitations of human and machine intelligence. In contrast, much AI-related thought leadership in the business press has arguably been influenced by an understanding of AI rooted in the scientific zeitgeist of the 1950s and the subsequent decades of science-fiction movies that it inspired.9

Second, the post-COVID world is likely to see increasing calls for new social contracts and institutional arrangements of the sort articulated by the Business Roundtable in August 2019.10 In addition to being more scientifically grounded, a human-centered approach to AI in the future of work will better comport with the societal realities of the post-COVID world. A recent essay by New America chief executive Anne-Marie Slaughter conveys today’s moment of opportunity:

The coronavirus, and its economic and social fallout, is a time machine to the future. Changes that many of us predicted would happen over decades are instead taking place in the span of weeks. The future of work is here [and it’s] an opportunity to make the changes we knew we were going to have to make eventually.11

To start, let us ground the discussion in the relevant lessons of both computer and cognitive science.

WHY DEEP LEARNING IS DIFFERENT FROM DEEP UNDERSTANDING


The view that AI will eventually be able to replace people reflects the aspiration—explicitly articulated
by the field’s founders in the 1950s—to implement human cognition in machine form.12 Since then,
it has become common for major AI milestones to be framed as machine intelligence taking another
step on a path to achieving full human intelligence. For example, the chess grandmaster Garry
Kasparov’s defeat by IBM’s Deep Blue computer was popularly discussed as “the brain’s last stand.”13
In the midst of his defeat by IBM Watson, the Jeopardy quiz show champion Ken Jennings joked, “I for
one welcome my new computer overlords.”14 More recently, a Financial Times profile of DeepMind
CEO Demis Hassabis, published shortly after AlphaGo’s defeat of Go champion Lee Sedol, stated: “At
DeepMind, engineers have created programs based on neural networks, modeled on the human
brain … The intelligence is general, not specific. This AI ‘thinks’ like humans do.”15

But the truth is considerably more prosaic than this decades-old narrative suggests. It is indeed
true that powerful machine learning techniques such as deep learning neural networks and
reinforcement learning are inspired by brain and cognitive science. But it does not follow that the
resulting AI technologies understand or think in humanlike ways.

So-called “second wave” AI applications essentially result from large-scale statistical inference on
massive data sets. This makes them powerful—and often economically game-changing—tools for
performing narrow tasks in sufficiently controlled environments. But such AIs possess no common
sense, conceptual understanding, awareness of other minds, notions of cause and effect, or intuitive
understanding of physics.

What’s more, and even more crucially, these AI applications are reliable and trustworthy only to the
extent that they are trained on data that adequately represents the scenarios in which they are to be
deployed. If the data is insufficient or the world has changed in relevant ways, the technology cannot
necessarily be trusted. For example, a machine translation algorithm would need to be exposed to
many human-translated examples of a new bit of slang to hopefully get it right.16 Similarly, a facial
recognition algorithm trained only on images of light-skinned faces might fail to recognize dark-
skinned individuals at all.17

In contrast, human intelligence is characterized by the ability to learn concepts from few examples, enabling people to function in unfamiliar or rapidly changing environments—essentially the opposite
of brute-force pattern recognition learned from massive volumes of (human-)curated data. Think
of the human ability to rapidly learn new slang words, work in physical environments that aren’t
standardized, or navigate cars through unfamiliar surroundings. Even more telling is a toddler’s
ability to learn language from a relative handful of examples.18 In each case, human intelligence
succeeds where today’s “second wave” AI fails because it relies on concepts, hypothesis formation,
and causal understanding rather than pattern-matching against massive historical data sets.

It is therefore best to view AI technologies as focused, narrow applications that do not possess the
flexibility of human thought. Such technologies will increasingly yield economic efficiencies, business
innovations, and improved lives. Yet the old idea that “general” AI would mimic human cognition has,
in practice, given way to today’s multitude of practical, narrow AIs that operate very differently from
the human mind. Their ability to generally replace human workers is far from clear.
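The box’s central technical claim—that such systems are reliable only to the extent that their training data represents deployment conditions—can be illustrated with a toy sketch. The data, classifier, and numbers below are invented for illustration (synthetic Gaussian blobs and a minimal nearest-centroid classifier), not anything from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two classes as Gaussian blobs; `shift` moves the deployment distribution."""
    X0 = rng.normal(loc=[0 + shift, 0], scale=0.5, size=(n, 2))
    X1 = rng.normal(loc=[2 + shift, 2], scale=0.5, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

class NearestCentroid:
    """Classify each point by its nearest class centroid."""
    def fit(self, X, y):
        self.centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids[None, :, :], axis=2)
        return d.argmin(axis=1)

X_train, y_train = make_data(500)             # "historical" training data
model = NearestCentroid().fit(X_train, y_train)

X_test, y_test = make_data(500)               # same distribution as training
X_drift, y_drift = make_data(500, shift=3.0)  # the world has changed

acc_iid = (model.predict(X_test) == y_test).mean()
acc_drift = (model.predict(X_drift) == y_drift).mean()
print(f"accuracy in-distribution: {acc_iid:.2f}")
print(f"accuracy after drift:     {acc_drift:.2f}")
```

The same mechanism underlies the slang and facial-recognition examples above: nothing in the model warns that the inputs no longer resemble the data it was fitted on.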
Why humans are underrated

A key theme that has emerged from decades of work in AI and cognitive science serves as a useful touchstone for evaluating the relative strengths and limitations of human and computer capabilities in various future of work scenarios. This theme is known as “the AI paradox.”19

It is hardly news that it is often comparatively easy to automate a multitude of tasks that humans find difficult, such as memorizing facts and recalling information, accurately and consistently weighing risk factors, rapidly performing repetitive tasks, proving theorems, performing statistical procedures, or playing chess and Go. What’s seemingly paradoxical is that the inverse also holds true: Things that come naturally to most people—using common sense, understanding context, navigating unfamiliar landscapes, manipulating objects in uncontrolled environments, picking up slang, understanding human sentiment and emotions—are often the hardest to implement in machines.

The renowned Berkeley cognitive scientist Alison Gopnik states, “It turns out to be much easier to simulate the reasoning of a highly trained adult expert than to mimic the ordinary learning of every baby.”20 The Harvard cognitive scientist Steven Pinker comments that the main lesson from decades of AI research is that “Difficult problems are easy, and the easy problems are difficult.”21

Far from being substitutes for each other, human and machine intelligence therefore turn out to be fundamentally complementary in nature. This basic observation turns the substitution view of AI on its head. In organizational psychology, what Scott Page calls a “diversity bonus” results from forming teams composed of different kinds of thinkers. Heterogeneous teams outperform homogenous ones at solving problems, making predictions, and innovating solutions.22 The heterogeneity of human and machine intelligences motivates the search for “diversity bonuses” resulting from well-designed teams of human and machine collaborators.

A “twist ending” to an AI breakthrough typically used to illustrate the substitution view—Deep Blue’s defeat of the chess grandmaster Garry Kasparov—vividly illustrates the largely untapped potential of the human-machine superminds approach. After his defeat, Kasparov helped create a new game called “advanced chess” in which teams of humans using computer chess programs competed against other such teams. In 2005, a global advanced chess tournament called “freestyle chess” attracted grandmaster players using some of the most powerful computers of the time. The competition ended in an upset victory: Two amateur chess players using three ordinary laptops, each running a different chess program, beat their grandmaster opponents using supercomputers.

Writing in 2010, Kasparov commented that the winners’ “skill at manipulating and ‘coaching’ their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants.” He went on to state what has come to be known as “Kasparov’s Law”:

Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong
www.deloitte.com/deloitte-review
32 FUTURE OF WORK

human + machine + inferior process … Human strategic guidance combined with the tactical acuity of a computer was overwhelming.23

In Thomas Malone’s vernacular, the system of two human players and three computer chess programs formed a human-computer collective intelligence—a supermind—that proved more powerful than competing group intelligences boasting stronger human and machine components, but inferior supermind design.

Though widespread, such phenomena are often hidden in plain sight and obscured by the substitution view of AI. Nonetheless, the evidence is steadily gathering that smart technologies are most effective and trustworthy when deployed in the context of well-designed systems of human-machine collaboration.

We illustrate different modes of collaboration24—and the various types of superminds that result—through a sequence of case studies below.

The best way to predict the future of work is to invent it

CHATBOTS AND CUSTOMER SERVICE
Call center operators handle billions of customer requests per year—changing flights, refunding purchases, reviewing insurance claims, and so on. To handle the flood of queries, organizations commonly implement chatbots to handle simple queries and escalate more complex ones to human agents.25 A common refrain, echoing the substitution view, is that human call center operators remain employed to handle tasks beyond the capabilities of today’s chatbots, but that these jobs will increasingly go by the wayside as chatbots become more sophisticated.

While we do not hazard a prediction of what will happen, we believe that call centers offer an excellent example of the surplus value, as well as more intrinsically meaningful work, that can be enabled by the superminds approach. In this approach, chatbots and other AI tools function as assistants to humans who increasingly function as problem-solvers. Chatbots offer uniformity and speed while handling massive volumes of routine queries (“Is my flight on time?”) without getting sick, tired, or burned out. In contrast, humans possess the common sense, humor, empathy, and contextual awareness needed to handle lower volumes of less routine or more open-ended tasks at which machines flounder (“My flight was canceled and I’m desperate. What do I do now?”). In addition, algorithms can further assist human agents by summarizing previous interactions, suggesting potential solutions, or identifying otherwise hidden customer needs.

This logic has recently been employed by a major health care provider to better deal with the COVID crisis. A chatbot presents patients with a sequence of questions from the US Centers for Disease Control and Prevention and in-house experts. The AI bot alleviates high volumes of hotline traffic, thereby enabling stretched health care workers to better focus on the most pressing cases.26

If this is done well, customers can benefit from more efficient, personalized service, while call center operators have the opportunity to perform less repetitive, more meaningful work involving problem-solving, engaging with the customer, and surfacing new opportunities. In contrast, relying excessively on virtual agents that are devoid of common sense, contextual awareness, genuine
empathy, or the ability to handle unexpected situations (consider the massive number of unexpected situations created by the COVID crisis) poses the risk of alienating customers.

Even if one grants the desirability of this “superminds” scenario, however, will AI technologies not inevitably decrease the number of such human jobs? Perhaps surprisingly, this is not a foregone conclusion. To illustrate, recall what happened to the demand for bank tellers after the introduction of automated teller machines (ATMs). Intuitively, one might think that ATMs dramatically reduced the need for human tellers. But the demand for tellers in fact increased after the introduction of ATMs: The technology made it economical for banks to open numerous smaller branches, each staffed with human tellers operating in more high-value customer service, less transactional roles.27 Analogously, a recent Bloomberg report told of a company that hired more call center operators to handle the increased volume of complex customer queries after its sales went up thanks to the introduction of chatbots.28

A further point is that the introduction of new technologies can give rise to entirely new job categories. In the case of call centers, chatbot designers write and continually revise the scripts that the chatbots use to handle routine customer interactions.29

This is not to minimize the threat of technological unemployment in a field that employs millions of people. We point out only that using technology to automate simple tasks need not inevitably lead to unemployment. As the ATM story illustrates, characteristically human skills can become more valuable when the introduction of a technology increases the number of nonautomatable tasks.
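The division of labor in the call-center scenario (a bot answers routine queries it matches with high confidence; everything low-confidence or open-ended escalates to a human agent) can be sketched as a threshold router. The intents, phrases, and threshold below are invented for illustration; production systems use trained intent classifiers rather than lexical overlap:

```python
# Toy escalation router: the bot handles only routine intents it matches
# with high confidence; everything else goes to a human agent.
# Intents, phrases, and the 0.6 threshold are illustrative inventions.

ROUTINE_INTENTS = {
    "flight_status": ["is my flight on time", "flight status", "arrival time"],
    "baggage_allowance": ["baggage allowance", "how many bags", "carry-on size"],
}

def match_score(query: str, phrases: list[str]) -> float:
    """Crude lexical-overlap score between a query and an intent's phrases."""
    q = {w.strip("?.!,'\"") for w in query.lower().split()}
    return max(len(q & set(p.split())) / len(set(p.split())) for p in phrases)

def route(query: str, threshold: float = 0.6) -> str:
    scores = {intent: match_score(query, ph) for intent, ph in ROUTINE_INTENTS.items()}
    best_intent, best = max(scores.items(), key=lambda kv: kv[1])
    if best >= threshold:
        return f"bot:{best_intent}"
    return "human"  # low-confidence or open-ended: escalate

print(route("Is my flight on time?"))                      # handled by the bot
print(route("My flight was canceled and I'm desperate."))  # escalated to a human
```

The design choice worth noting is the asymmetry: a missed routine query costs a little human time, while a bot mishandling a desperate customer risks the alienation described above, so the threshold errs toward escalation.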


RADIOLOGISTS AND “DEEP MEDICINE”
Radiology is another field commonly assumed to be threatened by technological unemployment. Much of radiology involves interpreting medical images—a task at which deep learning algorithms excel. It is therefore natural to anticipate that much of the work currently done by radiologists will be displaced.30 In a 2017 tweet publicizing a recent paper, Andrew Ng asked, “Should radiologists be worried about their jobs? Breaking news: We can now diagnose pneumonia from chest X-rays better than radiologists.”31 A year earlier, the deep learning pioneer Geoffrey Hinton declared that it’s “quite obvious that we should stop training radiologists.”32

But further reflection reveals a “superminds” logic strikingly analogous to the scenario just discussed in the very different realm of call centers. In his recent book Deep Medicine, Eric Topol quotes a number of experts who discuss radiology algorithms as assistants to expert radiologists.33 The Penn Medicine radiology professor Nick Bryan predicts that “within 10 years, no medical imaging study will be reviewed by a radiologist until it has been pre-analyzed by a machine.” Writing with Michael Recht, Bryan states that:

We believe that machine learning and AI will enhance both the value and the professional satisfaction of radiologists by allowing us to spend more time performing functions that add value and influence patient care, and less time doing rote tasks that we neither enjoy nor perform as well as machines.34

The deep learning pioneer Yann LeCun articulates a consistent idea, stating that algorithms can automate simple cases and enable radiologists to avoid errors that arise from boredom, inattention, or fatigue. Unlike Ng and Hinton, LeCun does not anticipate a reduction in the demand for radiologists.35

Using AI to automate voluminous and error-prone tasks so that doctors can spend more time providing personalized, high-value care to patients is the central theme of Topol’s book. In the specific case of radiologists, Topol anticipates that these value-adding tasks will include explaining probabilistic outputs of algorithms both to patients and to other medical professionals. For Topol, the “renaissance radiologists” of the future will act less as technicians and more as “real doctors” (Topol’s phrase), and also serve as “master explainers” who display the solid grasp of data science and statistical thinking needed to effectively communicate risks and results to patients.

This value-adding scenario, closely analogous to the chatbot and ATM scenarios, involves the deployment of algorithms as physician assistants. But other human-machine arrangements are possible. A recent study combined human and algorithmic diagnoses using a “swarm” tool that mimics the collective intelligence of animals such as honeybees in a swarm.
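The power of pooling diverse judges echoes a classic statistical effect: a majority vote among independent, better-than-chance judges is more accurate than any single judge. The simulation below is a hypothetical Condorcet-style toy model, not the swarm algorithm the study actually used, and the accuracy figures are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def majority_vote_accuracy(n_voters: int, p_correct: float, n_cases: int = 100_000) -> float:
    """Accuracy of a simple majority vote among independent judges, each
    correct with probability p_correct (a Condorcet-style toy model)."""
    votes = rng.random((n_cases, n_voters)) < p_correct  # True = correct vote
    return (votes.sum(axis=1) > n_voters / 2).mean()

single = 0.70  # one judge, 70% accurate (illustrative number)
group = majority_vote_accuracy(n_voters=15, p_correct=single)
print(f"single judge: {single:.2f}, majority of 15: {group:.2f}")
```

The toy model assumes the judges' errors are independent, which is precisely why heterogeneity matters: judges who all make the same mistakes gain little from voting, whereas humans and algorithms tend to fail in different ways.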
(Previous studies have suggested that honeybee swarms make decisions through a process that is similar to neurological brains.36) The investigators found that the hybrid human-machine system—which teamed 13 radiologists with two deep learning AI algorithms—outperformed both the radiologists and the AIs making diagnoses in isolation. Paraphrasing Kasparov’s law, humans + machines + a better process of working together (the swarm intelligence tool) outperforms the inferior process of either humans or machines working alone.37

MACHINE PREDICTIONS AND HUMAN DECISIONS
Using the mechanism of swarm intelligence to create a human-machine collective intelligence possesses the thought-provoking appeal of good science fiction. But more straightforward forms of human-machine partnerships for making better judgments and decisions have been around for decades—and will become increasingly important in the future. The AI pioneer and proto-behavioral economist Herbert Simon wrote that “decision-making is the heart of administration.”38 Understanding the future of work therefore requires understanding the future of decisions.

Algorithms are increasingly used to improve economically or societally weighty decisions in such domains as hiring, lending, insurance underwriting, jurisprudence, and public sector casework. Similar to the widespread suggestion of algorithms threatening to put radiologists out of work, the use of algorithms to improve expert decision-making is often framed as an application of machine learning to automate decisions.

In fact, the use of data to improve decisions has as much to do with human psychology and ethics as it does statistics and computer science. Once again, it pays to remember the AI paradox and consider the relative strengths and weaknesses of human and machine intelligence.

The systematic shortcomings of human decision-making—and corresponding relative strengths in algorithmic prediction—have been much discussed in recent years thanks to the pioneering work of Simon’s behavioral science successors Daniel Kahneman and Amos Tversky. Two major sorts of errors plague human decisions:

• Bias. Herbert Simon was awarded the Nobel Prize in economics partly for his realization that human-bounded cognition is such that we must rely on heuristics (mental rules of thumb) to quickly make decisions without getting bogged down in analysis paralysis. Kahneman, Tversky, and their collaborators and successors demonstrated that these heuristics are often systematically biased. For example, we confuse ease of imagining a scenario with the likelihood of its happening (the availability heuristic); we cherry-pick evidence that comports with our prior beliefs (confirmation bias) or emotional attachments (the affect heuristic); we ascribe unrelated capabilities to people who possess specific traits we admire (the halo effect); and

we make decisions based on stereotypes rather than careful assessments of evidence (the representativeness heuristic). And another bias—overconfidence bias—ironically blinds us to such shortcomings.

• Noise. Completely extraneous factors, such as becoming tired or distracted, routinely affect decisions. For example, when shown the same biopsy results twice, pathologists produced severity assessments that were only 0.61 correlated. (Perfect consistency would result in a correlation of 1.0.)39 Or simply think about whether you’d prefer to be interviewed or considered for promotion at the end of the day after a very strong job candidate, or closer to the beginning of the day after a very weak candidate.

Regarding noise, algorithms have a clear advantage. Unlike humans, algorithms can make limitless predictions or recommendations without getting tired or distracted by unrelated factors. Indeed, Kahneman—who is currently writing a book about noise—suggests that noise might be a more serious culprit than bias in causing decision traps, and views this as a major argument in favor of algorithmic decision-making.40

Bias is the more subtle issue. It is well known that training predictive algorithms on data sets that reflect human or societal biases can encode, and potentially amplify, those biases. For example, using historical data to build an algorithm to predict who should be made a job offer might well be biased against females or minorities if past decisions reflected such biases.41 Analogously, an algorithm used to target health care “super-utilizers” in order to offer preventative concierge health services might be biased against minorities who have historically lacked access to health care.42

As a result, the topic of machine predictions and human decisions is often implicitly framed as a debate between AI boosters arguing for the superiority of algorithmic to human intelligence on the one side, and AI skeptics warning of “weapons of math destruction” on the other. Adopting a superminds rather than a substitution approach can help people move beyond such unproductive debates.

One of us (Jim Guszcza) has learned from firsthand experience how predictive algorithms can be productively used as inputs into, not replacements for, human decisions. Many years ago, Deloitte’s Data Science practice pioneered the application of predictive algorithms to help insurance underwriters better select business insurance risks (for example, workers’ compensation or commercial general liability insurance) and price the necessary contracts.

Crucially, the predictive algorithms were designed to meet the end-user underwriters halfway, and the underwriters were also properly trained so that they could meet the algorithms halfway. Black-box machine learning models were typically used only as interim data exploration tools or benchmarks for the more interpretable and easily documented linear models that were usually put into production. Furthermore, algorithmic outputs were complemented with natural language messages designed to explain to the end user “why” the algorithmic prediction was what it was for a specific case.43 These are all aspects of what might be called a “human-centered design” approach to AI.44

In addition, the end users were given clear training to help them understand when to trust a machine prediction, when to complement it with other information, and when to ignore it altogether. After all, an algorithm can only weigh the inputs presented to it. It cannot judge the accuracy or completeness of those inputs in any specific case. Nor can it use common sense to evaluate context and judge how, or if, the prediction should inform the ultimate decision.

Such considerations, often buried by discussions that emphasize big data and the latest machine learning methods, become all the more pressing in the wake of such world-altering events as the COVID crisis.45 In such times, human judgment is more important than ever to assess the adequacy of algorithms trained on historical data that might be unrepresentative of the future. Recall that, unlike humans, algorithms possess neither the common sense nor the conceptual understanding needed to handle unfamiliar environments, edge cases, ethical considerations, or changing situations.

Another point is ethical in nature. Most people simply would not want to see decisions in certain domains—such as hiring, university admissions, public sector caseworker decisions, or judicial decisions—meted out by machines incapable of judgment. Yet at the same time, electing not to use algorithms in such scenarios also has ethical implications. Unlike human decisions, machine predictions are consistent over time, and the statistical assumptions and ethical judgments made in algorithm design can be clearly documented. Machine predictions can therefore be systematically audited, debated, and improved in ways that human decisions cannot.46

Indeed, the distinguished behavioral economist Sendhil Mullainathan points out that the applications in which people worry most about algorithmic bias are also the very situations in which algorithms—if properly constructed, implemented, and audited—also have the greatest potential to reduce the effects of implicit human biases.47

The above account provides a way of understanding the increasingly popular “human-centered AI” tagline: Algorithms are designed not to replace people but rather to extend their capabilities. Just as eyeglasses help myopic eyes see better, algorithms can be designed to help biased and bounded human minds make better judgments and decisions. This is achieved through a blend of statistics and human-centered design. The goal is not merely to optimize an algorithm in a technical statistical sense, but rather to optimize (in a broader sense) a system of humans working with algorithms.48 In Malone’s vernacular, this is “supermind design thinking.”

www.deloitte.com/deloitte-review
38 FUTURE OF WORK
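The training guidance described above, on when to rely on a machine prediction, when to combine it with other information, and when to set it aside, can be illustrated with a stylized sketch. The function name, confidence thresholds, and triage labels below are illustrative assumptions, not details of any system described in this article:

```python
# Stylized sketch of a "supermind" triage rule: the algorithm advises,
# and a simple policy tells the human end user how to treat each prediction.
# All names and thresholds here are illustrative assumptions.

def triage(confidence: float, auto_threshold: float = 0.95,
           review_threshold: float = 0.70) -> str:
    """Return a handling rule for a single machine prediction.

    - High confidence: the human can usually rely on the prediction.
    - Middle band: the prediction is one input among others.
    - Low confidence: the prediction is set aside; the human decides.
    """
    if confidence >= auto_threshold:
        return "rely on prediction"
    if confidence >= review_threshold:
        return "combine with other information"
    return "defer to human judgment"

# A hypothetical batch of model confidence scores:
for score in [0.98, 0.82, 0.40]:
    print(score, "->", triage(score))
```

The point of such a rule is that the thresholds are chosen to optimize the joint performance of the human-machine system, not the algorithm's accuracy alone.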

CAREGIVING

New America's Anne-Marie Slaughter comments:

Many of the jobs of the future should also be in caregiving, broadly defined to include not only the physical care of the very old and very young, but also education, coaching, mentoring, and advising. [The COVID] crisis is a reminder of just how indispensable these workers are.49

In a well-known essay about health coaches, the prominent medical researcher and author Atul Gawande provides an illuminating example of Slaughter's point. Gawande describes the impact of a health coach (Jayshree) working with a patient (Vibha) with multiple serious comorbidities and a poor track record of improving her diet, exercise, and medical compliance behaviors:

"I didn't think I would live this long," Vibha said through [her husband] Bharat, who translated her Gujarati for me. "I didn't want to live." I asked her what had made her better. The couple credited exercise, dietary changes, medication adjustments, and strict monitoring of her diabetes. But surely she had been encouraged to do these things after her first two heart attacks. What made the difference this time? "Jayshree," Vibha said, naming the health coach from Dunkin' Donuts, who also speaks Gujarati. "Jayshree pushes her, and she listens to her only and not to me," Bharat said. "Why do you listen to Jayshree?" I asked Vibha. "Because she talks like my mother," she said.50

The skills of caregivers such as Jayshree are at the opposite end of the pay and education spectra from such fields as radiology. And the AI paradox suggests that such skills are unlikely to be implemented in machine form anytime soon.

Even so, AI can perhaps play a role in composing purely human superminds such as the one Gawande describes. In Gawande's example, the value wasn't created by generic "human" contact, but rather by the sympathetic engagement of a specific human—in this case, one with a similar language and cultural background. AI algorithms have long been used to match friends and romantic partners based on cultural and attitudinal similarities. Such matching could also be explored to improve the quality of various forms of caregiving in fields such as health care, education, customer service, insurance claim adjusting, personal finance, and public sector casework.51 This illustrates another facet of Malone's superminds concept: Algorithms can serve not only as human collaborators, but also as human connectors.

Start with why

As Niels Bohr and Yogi Berra each said, it is very hard to predict—especially about the future. This essay is not a series of predictions, but a call to action. Realizing the full benefits of AI technologies will require graduating from a narrow "substitution" focus on automating tasks to a broader "superminds" focus on designing and operationalizing systems of human-machine collaboration.

The superminds view has important implications for workers, business leaders, and societies. Workers and leaders alike must remember that jobs are not mere bundles of skills, nor are they static. They can and should be creatively reimagined to make the most of new technologies in ways that simultaneously create more value for customers and more meaningful work for people.52

To do this well, it is best to start with first principles. What is the ultimate goal of the job for which the smart technology is intended? Is the purpose of a call center to process calls, or to help cultivate enthusiastic, high-lifetime-value customers? Is the purpose of a radiologist to flag problematic tumors, or to participate in the curing, counseling, and comforting of a patient? Is the purpose of a decision-maker to make predictions, or to make wise and informed judgments? Is the purpose of a store clerk to ring up purchases, or to enhance customers' shopping experiences and help them make smart purchases? Once such ultimate goals have been articulated and possibly reframed, we can go about the business of redesigning jobs in ways that make the most of the new possibilities afforded by human-machine superminds.

An analogy from MIT labor economist David Autor conveys the economic logic of why characteristically human skills will remain valued in the highly computerized workplaces of the future. In 1986, the space shuttle Challenger blew up, killing its entire crew. A highly complex piece of machinery with many interlocking parts and dependencies, the Challenger was destroyed by the failure of a single part—the O-ring. From an economist's perspective, the marginal utility of making this single part more resilient would have been virtually infinite. Autor states that, by analogy:

In much of the work that we do, we are the O-rings … As our tools improve, technology magnifies our leverage and increases the importance of our expertise, judgment, and creativity.53

In discussing the logic of human-machine superminds, we do not mean to suggest that achieving them will be easy. To the contrary, such forces as status quo bias, risk aversion, short-term economic incentives, and organizational friction will have to be overcome. Still, the need to overcome such challenges is common to many forms of innovation.

A further challenge relates to the AI paradox: Organizations must learn to better measure, manage, and reward the intangible skills that come naturally to humans but at which machines flounder. Examples include empathy for a call center operator or caregiver; scientific judgment for a data scientist; common sense and alertness for a taxi driver or factory worker; and so on. Such characteristically human—and often under-rewarded—skills will become more, not less, important in the highly computerized workplaces of the future.

PAIRING HUMANS AND MACHINES TO FORM SUPERMINDS: THRIVING IN A CHANGING WORLD


By Joe Ucuzoglu, CEO, Deloitte US
The unique capabilities of humans matter now more than ever, even in the face of
rapid technological progress. In the C-suite and boardrooms, a range of complex
topics dominate the agenda: from understanding the practical implications of AI,
cloud, and all things digital, to questions of purpose, inclusion, shareholder primacy
versus stakeholder capitalism, trust in institutions, and rising populism—and now,
the challenges of a global pandemic. In all of these areas, organizations must
navigate an unprecedented pace of change while keeping human capabilities and
values front and center.

We know from recent years of technological advancement that machines are typically far better than
people at looking at huge data sets and making connections. But data is all about the past. What is being
created here in the Fourth Industrial Revolution—and in the era of COVID-19—is a future for which past
data can be an unreliable guide. Pablo Picasso once said, “Computers are useless. All they can do is
provide us with the answers.” The key is seeing the right questions, the new questions—and that’s where
humans excel.

What’s more, the importance of asking and answering innovative questions extends up and down entire
organizations. It’s not just for C-suites and boardrooms, as Jim Guszcza and Jeff Schwartz share in their
examples. It’s about effectively designing systems in which two kinds of intelligence, human and machine,
work together in complementary balance, forming superminds.

Embracing the concept of superminds and looking holistically at systems of human-machine collaboration provides a way forward for executives. The question is, "What next?" The adjustments all of us have had to make in light of COVID-19 show that we are capable of fast, massive shifts when required, and of innovating new ways of working with technology. Eventually, this pandemic will subside, but the
currents of digital transformation that have been accelerated out of necessity over the past few months
are likely to play out for the rest of our working lives.

How will your organization become a master of rapid experimentation and learning, of developing and rewarding essential human skills, and of aligning AI-augmented work with human potential and aspirations?

The authors thank John Seely Brown, John Hagel, Margaret Levi, Tom Malone,
and Maggie Wooll for helpful conversations. We also thank Siri Anderson, Abha
Kishore Kulkarni, Susan Hogan, and Tim Murphy for invaluable research
assistance and support.

James Guszcza, Deloitte Services LP, is Deloitte’s US chief data scientist. He lives in Santa Monica,
California.

Jeff Schwartz, a principal with Deloitte Consulting LLP, is the US leader for the Future of Work and the
US leader of Deloitte Catalyst, Tel Aviv. He is based in New York City.

Read more on www.deloitte.com/insights


Talent and workforce effects in the age of AI
Over the past few years, artificial intelligence has matured into a collection of powerful technologies
that are delivering competitive advantage to businesses across industries. But will AI-driven automation
render most jobs obsolete, or will humans be working in collaboration with the technology?

Visit www.deloitte.com/insights/workforce-ai-adoption

Endnotes

Superminds, not substitutes
page 26

1. Barbara J. Grosz, "Some reflections on Michael Jordan's article 'Artificial intelligence—the revolution hasn't happened yet,'" Harvard Data Science Review, July 2, 2019.

2. Shana Lynch, "Andrew Ng: Why AI is the new electricity," Insights by Stanford Business, March 11, 2017.

3. A general-purpose technology (GPT) is a type of technology whose breadth of applications and spillover effects can drastically alter entire economies and social structures. Previous examples of GPTs include the invention of writing, the steam engine, the automobile, the mass production system, the computer, the internet, and, of course, electricity.

4. Lynch, "Andrew Ng: Why AI is the new electricity."

5. Kevin Roose, "The hidden automation agenda of the Davos elite," New York Times, January 25, 2019.

6. For a discussion on reimagining jobs and "reconstructing work," see: Peter Evans-Greenwood, Harvey Lewis, and Jim Guszcza, "Reconstructing work: Automation, artificial intelligence, and the essential role of humans," Deloitte Review 21, July 31, 2017; for more on the theme of redefining and redesigning work to create new sources of value, see: John Hagel, Jeff Schwartz, and Maggie Wooll, "Redefining work for new value: The next opportunity," MIT Sloan Management Review, December 3, 2019; for a discussion on the business logic of focusing on value creation rather than cost reduction, see: Jeff Schwartz et al., "Reframing the future of work," MIT Sloan Management Review, February 20, 2019.

7. Thomas W. Malone, Superminds (New York: Little, Brown Spark, 2018). For an interview with Malone, see: Jim Guszcza and Jeff Schwartz, "Superminds: How humans and machines can work together," Deloitte Review 24, January 28, 2019. Malone's and others' work in the emerging multidisciplinary field of collective intelligence is surveyed in the Handbook of Collective Intelligence, edited by Thomas W. Malone and Michael S. Bernstein (Cambridge, Massachusetts: MIT Press, 2015). Superminds explores the newer forms of collective intelligence that can emerge from groups of humans connected and otherwise augmented by digital and AI technologies.

8. Personal communication from Thomas Malone to Jim Guszcza.

9. For example, the AI pioneer Marvin Minsky served as an adviser to Stanley Kubrick's and Arthur C. Clarke's 2001: A Space Odyssey. Perhaps that movie's most memorable character was HAL 9000, a computer that spoke fluent English, used commonsense reasoning, experienced jealousy, and tried to escape termination by doing away with the ship's crew. In short, HAL was a computer that implemented a very general form of human intelligence. Minsky and other AI leaders of the day believed that such general, human-imitative artificial intelligences would be achievable by the year 2001.

10. David Gelles and David Yaffe-Bellany, "Shareholder value is no longer everything, top CEOs say," New York Times, August 19, 2019.
11. Anne-Marie Slaughter, "Forget the Trump administration. America will save America," New York Times, March 21, 2020.

12. It is commonly agreed that the field of AI originated at a 1956 summer conference at Dartmouth College, attended by such scientific luminaries as John McCarthy, Claude Shannon, Allen Newell, Herbert Simon, and Marvin Minsky. The conference's proposal stated: "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence [emphasis added] can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." See: John McCarthy et al., "A proposal for the Dartmouth Summer Research Project on artificial intelligence," AI Magazine 27, no. 4 (2006). Regarding the time frame, the proposal went on to state, "We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it for a summer."

13. Garry Kasparov, "The brain's last stand," keynote address at DefCon 2017, Kasparov.com, August 18, 2017.

14. YouTube, "Jeopardy IBM challenge day 3 Computer Overlords clip," video, July 28, 2012.

15. Murad Ahmed, "Demis Hassabis, master of the new machine age," Financial Times, March 11, 2016. This was not an isolated statement. Two days earlier, the New York Times carried an opinion piece stating that "Google's AlphaGo is demonstrating for the first time that machines can truly learn and think in a human way." See: Howard Yu, "AlphaGo's success shows the human advantage is eroding fast," New York Times, March 9, 2016.

16. Gary Marcus and Ernest Davis, Rebooting AI (New York: Pantheon, 2019). For a relevant excerpt, see: Gary Marcus and Ernest Davis, "If computers are so smart, how come they can't read?," Wired, September 10, 2019.

17. Steve Lohr, "Facial recognition is accurate, if you're a white guy," New York Times, February 9, 2018.

18. The linguist Noam Chomsky famously observed that children can learn grammar based on surprisingly little data, and argued that knowledge of grammar is innate. This is the so-called "poverty of the stimulus" argument. For a modern discussion that points toward potential developments in third-wave AI, see: Amy Perfors, Joshua Tenenbaum, and Terry Regier, "Poverty of the stimulus? A rational approach," MIT.edu, January 2006.

19. In computer science, the phenomenon goes by the name "Moravec's Paradox" after Hans Moravec, who stated that "it is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility." See: Hans Moravec, Mind Children (Cambridge, Massachusetts: Harvard University Press, 1988).

20. Alison Gopnik, "What do you think about machines that think?," Edge, 2015. Gopnik also states, "One of the fascinating things about the
search for AI is that it's been so hard to predict which parts would be easy or hard. At first, we thought that the quintessential preoccupations of the officially smart few, like playing chess or proving theorems—the corridas of nerd machismo—would prove to be hardest for computers. In fact, they turn out to be easy. Things every dummy can do like recognizing objects or picking them up are much harder."

21. Steven Pinker, The Language Instinct (New York: Harper Perennial Modern Classics, 1994).

22. Scott E. Page, The Diversity Bonus: How Great Teams Pay Off in the Knowledge Economy (Princeton, New Jersey: Princeton University Press, 2017).

23. Garry Kasparov, "The chess master and the computer," New York Review of Books, February 11, 2010.

24. Malone provides a useful taxonomy for designing such human-machine partnerships. Computers can serve as tools for humans, assistants to humans, peers of humans, or managers of humans. (See: Malone, Superminds.) Malone also discusses these modes of collaboration in "Superminds: How humans and machines can work together," Deloitte Review 24, January 28, 2019.

25. A 2016 research report found that 80 percent of businesses were either using chatbots or planned to adopt them by 2020. Business Insider Intelligence, "80% of businesses want chatbots by 2020," December 14, 2016.

26. Kelley A. Wittbold et al., "How hospitals are using AI to battle COVID-19," Harvard Business Review, April 3, 2020.

27. This analysis was done by the economist James Bessen. See: James Pethokoukis, "What the story of ATMs and bank tellers reveals about the 'rise of the robots' and jobs," blog, AEI, June 6, 2016.

28. Ironically, the report is entitled "How bots will automate call center jobs." See Bloomberg, "How bots will automate call center jobs," August 15, 2019.

29. Ibid.

30. Ziad Obermeyer and Ezekiel Emanuel, "Predicting the future—big data, machine learning, and clinical medicine," New England Journal of Medicine 375 (2016): pp. 1,216–19, DOI: 10.1056/NEJMp1606181. The authors comment that, because their work largely involves interpreting digitized images, "machine learning will displace much of the work of radiologists and anatomical pathologists."

31. Tweet by Eric Topol, Twitter, 7:35 AM, November 16, 2017.

32. Geoffrey Hinton, "On radiology," YouTube video, November 24, 2016.

33. Eric Topol, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (New York: Basic Books, 2019).

34. Ibid.

35. All quotes in the above paragraph are from Eric Topol, "Doctors and Patterns," Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (New York: Basic Books, 2019).

36. James A.R. Marshall et al., "On optimal decision-making in brains and social insect colonies," Journal of the Royal Society Interface 6 (2009): pp. 1,065–74, DOI: 10.1098/rsif.2008.0511.
37. Bhavik N. Patel et al., "Human–machine partnership with artificial intelligence for chest radiograph diagnosis," Nature Digital Medicine 2 (2019). Interestingly, the authors report that the machine diagnoses had a higher true positive rate (sensitivity), while the human diagnosticians had a lower false positive rate (higher specificity).

38. Herbert Simon, Administrative Behavior: A Study of Decision-Making Processes in Administrative Organizations, 4th ed. (New York: Free Press, 1997).

39. Daniel Kahneman et al., "Noise: How to overcome the high, hidden cost of inconsistent decision making," Harvard Business Review, October 2016.

40. For example: Paul McCaffrey, "Daniel Kahneman: Four keys to better decision making," Enterprising Investor, June 8, 2018; Heloise Wood, "William Collins splashes seven figures on Kahneman's book about noise," Bookseller, March 20, 2018.

41. For example: Miranda Bogen, "All the ways hiring algorithms can introduce bias," Harvard Business Review, May 6, 2019.

42. For example: Ziad Obermeyer et al., "Dissecting racial bias in an algorithm used to manage the health of populations," Science 366, no. 6,464 (2019): pp. 447–53, DOI: 10.1126/science.aax2342.

43. Upon joining the practice, Jim's reaction—similar to that of many of our clients—channeled the substitution view: "This is impossible: The data is way too sparse and messy to eliminate the need for human underwriters!" In fact, the solution was superminds, not substitution, in nature. To ensure effective human-algorithm collaboration, expert human judgment was infused into both the design and the daily operation of the solution. Underwriters used their domain and institutional knowledge to help the actuaries and data scientists create data features and understand data limitations; the actuaries and data scientists carefully sampled and adjusted the historical data to mirror the scenarios in which the algorithm was to be applied. In addition, they collaborated with business owners to establish "guardrails" and business rules for when and how the algorithms should be used for various risks. In contrast to the extreme form of machine learning-based AI in which prior knowledge is thrown away in favor of algorithmic pattern-matching, human knowledge was infused into the solution at multiple steps. And in contrast with the substitution narrative, the solution was explicitly designed to meet the needs of human end users.

44. For further discussion of this conception of human-centered AI, see: Jim Guszcza, "Smarter together: Why artificial intelligence needs human-centered design," Deloitte Review 22, January 22, 2018.

45. For example: Matissa Hollister, "AI can help with the COVID-19 crisis—but the right human input is key," World Economic Forum, March 30, 2020. The article notes: "AI systems need a lot of data, with relevant examples in that data, in order to find these patterns. Machine learning also implicitly assumes that conditions today are the same as the conditions represented in the training data. In other words, AI systems implicitly assume that what has worked in the past will still work in the future."

46. For a broader discussion of AI ethics, see: Jim Guszcza et al., "Human values in the loop: Design principles for ethical AI," Deloitte Review 26, January 28, 2020.

47. Chicago Booth Review, "Sendhil Mullainathan says AI can counter human biases," August 7, 2019.
48. A physical analog to this discussion of human-machine "cognitive collaboration" for improved judgments and decisions is the concept of "cobots" developed by the prominent roboticist Rodney Brooks. Brooks and his collaborators have designed robots that can be trained without writing code, give human collaborators visual cues (such as animated "eyes" that move in the direction of a robot arm), and are designed to be safe for humans to collaborate with. See: Erico Guizzo and Evan Ackerman, "Rethink Robotics, pioneer of collaborative robots, shuts down," IEEE Spectrum, October 4, 2018.

49. Slaughter, "Forget the Trump administration. America will save America."

50. Atul Gawande, "The hot spotters: Can we lower medical costs by giving the neediest patients better care?," New Yorker, January 17, 2011.

51. For example, CareLinx is a startup that uses algorithms to match caregivers with families. See: Kerry Hannon, "Finding the right caregiver, eHarmony style," Money, July 11, 2016.

52. Hagel, Schwartz, and Wooll, "Redefining work for new value: The next opportunity"; Schwartz et al., "Reframing the future of work."

53. David H. Autor, "Why are there still so many jobs? The history and future of workplace automation," Journal of Economic Perspectives 29, no. 3 (2015): pp. 3–30.
