The New Division of Labor: On Our Evolving Relationship With Technology
Contents
A matter of trust
Integrating human and machine
The limitations of AI
Conclusion
Endnotes
A matter of trust
When the computerized word processor supplanted the typewriter, responsibility for producing day-to-day business correspondence shifted from typist to manager. This made document production much more efficient as it eliminated the round trip from Dictaphone1 through typing pool and back to the author for approval. This increase in efficiency didn’t occur overnight, though. Word processors may have promised to make managers more productive, but many initially balked at using them because doing so didn’t suit their personal work habits, nor did they see it as a task fit for their station.
Crafting their own documents meant that managers needed to acquire composition and proofreading skills, as well as the more technical skills (such as typing and functional digital literacy) required to operate a word processor. But perhaps the most fundamental change they had to make was in their mindsets. To effectively use a word processor, they had to build a relationship with this new technology, trusting that colorful squiggly underlines actually indicated errors, that saved documents could be retrieved at will, and that the machine wouldn’t crash (well, not too often anyway). Building this trust—integrating the new tool into their personal work habits—took time, as people were naturally loath to abandon old attitudes and behaviors.2
Most managers did come to embrace word processors, with the result that these machines not only became widespread, but fundamentally changed the way business correspondence is produced. Along with this change came a shift in social norms. Typing is now considered an important skill to be taught to children in school, while executives no longer see typing as beneath their role.
Without these changes in social norms, and without the shift in attitude from anxiety to acceptance, word processors may have remained on the fringes of society, or at least not have been used to their full potential. We might have simply replaced the typing pool’s typewriters with word processors, preserving the old, inefficient Dictaphone-to-typist-to-author process for producing correspondence. Instead, we changed our entire approach to correspondence production—and were able to take better advantage of the new technology as a result.
Putting technology to effective use isn’t only about recognizing the superiority of a new tool, or even just about learning how to use it. It’s also a matter of emotional acceptance and social validation—factors that are at least as important as the intellectual understanding that the new technology is “better.” This is true for much more than word processors. From shipping containers3 to smartphones,4 both work habits and social norms had to change before the core technology could have a transformative impact.
We are now facing the same situation with another potentially transformative technology: artificial intelligence (AI). While AI-based technologies are rapidly gaining ground in both the workplace and society at large, it can be argued that they are overwhelmingly being used in a suboptimal fashion—to automate tasks that have traditionally been performed by humans, who then are considered redundant and eliminated. In a previous essay, we argued that this is unnecessarily limiting AI’s potential utility.5 There’s a growing body of evidence that solutions created collaboratively by humans and machines are different from, and superior to, solutions created by either humans or machines individually.6 If this is true, then it follows that humans, far from being replaceable, are essential partners in realizing AI’s optimal value.7
Could our current social norms and attitudes be responsible for our failure to experiment with using AI-based tools collaboratively to define and solve problems together? It’s at least possible. Certainly, there’s no shortage of anxiety about technology in general and AI in particular. Even apart from the existential fear that robots will take our jobs, many people are concerned that the digital technology pervading society has (or will soon) become autonomous, emancipated, and independent from us, its parents. Some believe that technology is the prime factor shaping society and our lives, and that it is leading us in undesirable directions. Nor is it clear how we might trust a technology that’s seen as a contributor to “fake news,” among other problems. In some people’s opinion, the development of AI is the endgame for humans, the final nail in the coffin of a species that has unwittingly ceded control of its fate to its mechanical creations.
The root of these fears may lie in the instrumental relationship that we typically have with technology. We conceive of ourselves as tool “users,” the active agents, while our tools are relegated to the status of the “used.” We’re so accustomed to framing our relationship with tools according to this instrumental “user/used” dichotomy that it’s difficult to imagine it being any other way. Because of this, the only roles we can see for ourselves with respect to AI are those of either the “users,” the masters, or the “used,” the slaves—and with the technology seemingly about to gain a mind of its own, it’s anything but certain that we’ll wind up on top.
But AI is different from previous generations of tools, and not because it may be poised to develop sentience and take over the world. The difference is that our relationship with AI-enabled tools has the potential to evolve beyond being an instrument for particular tasks—and that, ironically, is because of AI’s very ability to mimic having “a mind of its own.”8 Because AI embeds reasoning, often quite complex reasoning, and the ability to act, in a tool, the tool is no longer just a passive instrument serving our needs. Rather than simply using such a tool, we’re now interacting with it, allowing it to make and act on decisions on our behalf, considering its responses, and changing our intentions as a result.
It’s possible that if we persist in approaching AI-based tools as mere instruments rather than as actors with (albeit limited) autonomy and agency in their own right, we will fail to realize AI’s full value—the equivalent of replacing typewriters with word processors while continuing to send correspondence through the typing pool. On the other hand, if we pursue a new, different relationship with AI—one that capitalizes on the technology’s decision-making capabilities—we can envision a new division of labor between humans and intelligent machines that takes better advantage of the strengths of each. It is through this process of constructing our new relationship that we will come to develop the ethics, the social norms, that constrain the agency of both humans and machines, determining what machines can do, what humans can direct the machines to do, and what is beyond the pale. What this new relationship with AI could look like, and what it might imply for society’s view of how work is valued and rewarded, is the subject of the rest of this essay.
Integrating human and machine
Prior to the industrial revolution, we had a simple, mechanical relationship with technology. Work implied craft and the craftsperson. Indeed, the final task of a journeyman (a craftsperson on the journey between apprentice and master) before graduating was to construct a masterwork demonstrating their proficiency in all aspects of their craft. Technology was little more than technique, with the craftsperson typically responsible for making and maintaining their own tools.
This relationship bifurcated in the industrial era, separating technology from technique as the technology and the tools that represented it became more complicated. Some workers specialized in technology—creating tools—while others focused on using them—technique. Each of these relationships requires its own particular knowledge and skills. A weaver might operate a power loom, to pick one example, but typically would not know how to construct one. Cloth made by a machinist, conversely, would be of poor quality no matter how detailed their knowledge of a loom’s inner workings. This divergence in roles has accelerated as technology has become increasingly complicated, resulting in tools such as computers or even the modern motor car that most of us use but few understand.9
In both the preindustrial and industrial eras, however, the tools in question, whether a craftsperson’s chisel or a machinist’s lathe, remained objects. All decision-making authority remained with the human, whether tool maker or tool user; the person was always the active agent. AI has the potential to change that, and in doing so, to drive another step change in the way we relate to our now-intelligent tools.
AI enables us to codify decisions via algorithms: Which chess piece should be moved? What products are best to populate this investment portfolio? These decisions are made in response to a changing environment—our chess opponent’s move, a change in an individual’s circumstances. In fact, it’s the environmental change that prompts the action. This responsiveness to external stimuli is why it is often more natural to think of AI as mechanizing behaviors rather than tasks.10 “Task” implies a piece of work to be done regardless of the surrounding context; a behavior is a response to the world changing around us, something we do in response to an appropriate stimulus, with the same stimulus in different contexts triggering different behaviors.
Solutions that contain a set of automated behaviors have a degree of autonomy and agency, as they will react to some changes in the environment around them (constrained by what their set of behaviors enables) without an operator’s intervention.
This autonomy and agency might be relatively benign, such as a face-recognition behavior automatically tagging new holiday snapshots with the names of family members that it identifies, and getting a few wrong. Or it might be much more consequential, like a military drone that also has the authority to apply lethal force.
In any case, just as we sense and respond to the environment changing around us, so will the intelligent digital systems we have created. We will weave the digital behaviors of these systems with our own human behaviors, reacting to changes they make in the world at the same time as they react to changes that we make. It will be as if these solutions exist along with us in the organization chart, their autonomy and agency affecting our own. Rather than “digital solutions,” they can more properly be thought of as “digital agents.”
As we interact with digital agents to get work done—discovering different ways to effectively combine their capabilities with ours—our relationship with technology will be primed to evolve again to account for the sharing of authority between worker and tool. This, we posit, will add a new, tripartite layer on top of the “maker/user” dichotomy that arose in the industrial era (figure 1):
FIGURE 1
First, we can expect humans to work on machines, similar to how a manager might teach a worker how to perform a task. This might be (as is often assumed) via “coding,” such as when a developer takes engineering knowledge and programs it into a computer-aided design tool.11 Knowledge can also be captured by example. A “truck driver” can teach an autonomous truck how to park in a particular loading bay by manually guiding it in the first time. A robot chef learns to cook a meal by observing, and then copying, the actions of a human.12 Or our digital agent might be taught by preference. This is an iterative process in which the agent responds to a range of stimuli, with a human trainer selecting the responses that best match what the agent needs to achieve.13 Over time the agent will infer what the most appropriate response is.
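To make the preference-teaching loop more concrete, here is a minimal sketch (our own illustration, not a method described in this essay or in the work cited in endnote 13): a simulated trainer repeatedly picks the better of two candidate behaviors, and the agent fits simple Bradley-Terry-style scores to those choices. Every behavior name and number below is an assumption invented for the example.

```python
# Illustrative sketch of "teaching by preference": the agent proposes pairs of
# candidate behaviors, a (simulated) human trainer picks the one they prefer,
# and the agent updates a per-behavior score so it can later choose on its own.
import math
import random

random.seed(1)

# Candidate behaviors a hypothetical home agent could exhibit when the room gets cold.
BEHAVIORS = ["do nothing", "close the blinds", "turn on the heater", "nudge the thermostat"]

# The trainer's preference order. The agent never sees this table directly; it
# only observes which of two proposed behaviors the trainer picks.
HIDDEN_RANK = {"nudge the thermostat": 3, "turn on the heater": 2, "close the blinds": 1, "do nothing": 0}

def trainer_picks(a, b):
    """Simulated trainer: return whichever of the two behaviors they prefer."""
    return a if HIDDEN_RANK[a] > HIDDEN_RANK[b] else b

scores = {b: 0.0 for b in BEHAVIORS}  # the agent's learned preference scores
LEARNING_RATE = 0.1

for _ in range(2000):
    a, b = random.sample(BEHAVIORS, 2)   # agent proposes two candidate responses
    winner = trainer_picks(a, b)         # trainer selects the better one
    loser = b if winner == a else a
    # Bradley-Terry: the model's predicted probability that `winner` is preferred.
    p_win = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
    # Nudge the scores so the model better predicts the observed choice.
    scores[winner] += LEARNING_RATE * (1.0 - p_win)
    scores[loser] -= LEARNING_RATE * (1.0 - p_win)

# After enough comparisons, the agent infers the most appropriate response itself.
print("inferred favourite:", max(scores, key=scores.get))   # -> "nudge the thermostat"
```

The point is not the particular update rule but the shape of the interaction: the trainer never states their preferences explicitly, and the agent infers them from repeated comparisons.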
Second, humans will find themselves working with machines, when digital behaviors are used to complement our own human behaviors. An obvious example is a tumor-identification behavior used to augment an oncologist’s ability to diagnose skin cancer by helping them locate potential tumors.14 In this category are automated behaviors that help us observe the environment around us. Machine vision might detect driver fatigue or distraction, either via image recognition or by observing brain wave activity. We can bolster our decisions by using digital behaviors that enumerate the available options, help us weigh them objectively, and suggest the most likely course of action: anything from which movie to watch next to how best to treat a medical condition. Or digital behaviors can send the results of our decisions back out into the world. This might be as simple as setting a price, or as complex as steering an autonomous car or crafting a news story.15
Third, we will work for machines, when our activities are directed by a machine and the quality of our work gauged by it. Machines may delegate actions that they are unable to perform to a human. A human can drive a car (at least, until the cars can drive themselves) to deliver a package or person under the direction of a machine, as with many ridesharing services.16 Rather than piece work, a machine might specify a sequence of tasks, creating a schedule for a human worker that optimizes their time, and then track progress. In any case, the work is specified, and quality measured, by the machine, without negotiation with the human.
Further, if we’re to understand our possible future relationship with intelligent technology, then we also need to consider that the relationship won’t be one-sided. That is, we won’t just work on, with, or for digital agents; digital agents will also work on, with, or for us—or the relationship may be roughly equal and collaborative. (We might say that a semiautonomous car is assisting the human driver, for example, but we can just as well consider the human to be assisting the semiautonomous car.)17
Consequently, we can take our model for postindustrial work and make it two-dimensional (figure 2). The horizontal axis has columns for each of the working for, with, and on roles that we’ve already discussed, and represents the different ways authority is divided between human and machine. The new vertical axis represents the division of agency, and is also broken into three.
The top row of figure 2 captures how our relationship with technology is changing, where the machines have the agency while ours is limited to supporting them. The bottom row of figure 2 is the reverse, capturing how technology’s relationship with us is changing, where we have the agency and the machines’ agency is limited to supporting us. In the middle we have human–machine collaboration, where the two work as equals.
FIGURE 2
MACHINE LEADS (the human works for, with, or on the machine)
Working for machines: the machine assigns work and assesses the quality of the product. For example, a ridesharing driver, or pick-n-pack work in a distribution centre.
Working with machines: the human steps in when the machine is out of its depth and about to make a mistake. For example, a “driver” monitoring an autonomous vehicle, or an auditor monitoring a sentencing algorithm for bias and compliance with the law.
Working on machines: an engineer formalizes best practice for the design and certification of a home design tool; a “truck driver” teaches autonomous trucks to park at a new distribution centre, or teaches a smart home to maintain a preferred temperature.
HUMAN LEADS (the machine works for, with, or on the human)
Working for a human: the human assigns work and assesses the result. For example, a trader managing a team of trading bots that control investment portfolios, setting and updating the bots’ objectives and intermittently monitoring their progress.
Working with a human: the machine supports the human’s work, stepping in to highlight potential issues and problems. For example, a “sentencing computer” monitors a judge’s deliberations to help the judge avoid bias in decisions.
Working on a human: the machine helps the human learn new tricks. For example, a Fitbit prompts its owner to help them reach their wellbeing goals, or a self-paced learning solution tailors a massive open online course (MOOC) to an individual student’s abilities and progression through the course.
The limitations of AI
All nine modes of the relationship we’ve described in the previous section represent a mix of human and machine behaviors, whether the human or the machine takes the lead in working on, with, or for the other. That doesn’t mean that the two parties are interchangeable; AI has its own inherent limitations, meaning we expect a difference in the kind of behaviors that humans and machines contribute. Humans “think” whereas machines crunch numbers. That difference means that humans, with their ability to adapt to an almost infinite range of contexts, can behave in ways that create new knowledge—forming new goals, defining new problems, and framing new contexts in which to act. AI-enabled machines, on the other hand, apply knowledge, and depend on the context or contexts provided them if they are to behave in the expected fashion. Their contribution is to embody existing knowledge, evincing it in behaviors that help to achieve a defined set of goals in a specified range of contexts—but they are unable to form new goals or redefine the context in which those goals are pursued, as they are constrained to viewing the world in the way that their creators have framed it.
To understand why, consider that, before a task can be automated, it must be mechanized: the actions required to prosecute the task realized in gears, pulleys, levers, and pistons. To be mechanized, a task must be precisely defined and the context within which the mechanism will operate carefully controlled. Mechanized tasks might be precise, but they are also inflexible, requiring everything to be just so.
Artificial intelligence has a similar dynamic. Just as tasks can be mechanized and automated, the algorithms that give rise to behaviors can similarly be mechanized and then automated. For Alpha Go, Google’s champion Go computer, this was done with a Markov Decision Process (a mathematical framework for managing decisions when outcomes are partly under the control of the decision-maker).18 Once mechanized, we can apply the magic of digital computing to give the algorithm a life of its own—to turn it into a digital agent. However, if our digital agent is to be reliable, then we need to carefully control the context (the data inputs, any outputs, and the time and resources allowed) it operates in. Even small disturbances to this context can result in dramatically different, and potentially incorrect, answers.
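To show what “mechanizing” a decision can look like in practice, here is a small, self-contained sketch: value iteration over a toy Markov Decision Process. It is a generic textbook construction, not anything from Alpha Go; the states, probabilities, and rewards are all invented for the illustration.

```python
# A toy Markov Decision Process solved with value iteration. Once states,
# actions, transition probabilities, and rewards are written down, choosing
# the "best" action becomes a mechanical calculation.

STATES = ["low_battery", "ok_battery", "charged"]   # a tiny delivery robot
ACTIONS = ["work", "recharge"]

# transitions[state][action] = list of (probability, next_state, reward).
# Outcomes are only partly under the robot's control: actions sometimes "slip".
TRANSITIONS = {
    "low_battery": {
        "work":     [(0.5, "low_battery", 1.0), (0.5, "low_battery", -10.0)],
        "recharge": [(0.9, "charged", 0.0), (0.1, "low_battery", 0.0)],
    },
    "ok_battery": {
        "work":     [(0.7, "ok_battery", 2.0), (0.3, "low_battery", 2.0)],
        "recharge": [(1.0, "charged", 0.0)],
    },
    "charged": {
        "work":     [(0.8, "ok_battery", 2.0), (0.2, "charged", 2.0)],
        "recharge": [(1.0, "charged", 0.0)],
    },
}

GAMMA = 0.9  # how much the robot cares about future reward

# Value iteration: repeatedly back up the expected value of each state.
values = {s: 0.0 for s in STATES}
for _ in range(200):
    values = {
        s: max(
            sum(p * (r + GAMMA * values[s2]) for p, s2, r in TRANSITIONS[s][a])
            for a in ACTIONS
        )
        for s in STATES
    }

# The resulting policy: the action with the highest expected value in each state.
policy = {
    s: max(
        ACTIONS,
        key=lambda a: sum(p * (r + GAMMA * values[s2]) for p, s2, r in TRANSITIONS[s][a]),
    )
    for s in STATES
}
print(policy)  # e.g. {'low_battery': 'recharge', 'ok_battery': 'work', 'charged': 'work'}
```

Everything the agent “knows” is in those tables; change the transition probabilities or rewards even slightly and the computed policy can change, which is the context sensitivity described above.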
For instance, a complex solution such as an autonomous car contains many mechanized behaviors: lane following, obstacle avoidance, separation maintenance, and so on. Despite their apparent sophistication, however, these solutions contain relatively few behaviors compared to a human, and each behavior is comparatively unsophisticated and narrowly defined. An autonomous car cannot use eye contact with other drivers, for example, to determine who will take the right of way at a confusing intersection. Consequently, the current generation of (semi)autonomous cars can only operate in tightly constrained environments, and they require humans to augment them when their limited set of behaviors fails.
This is why mechanizing and then automating algorithmically determined behaviors will not result in an “AI,” an artificial being, or “AGI,” an artificial general intelligence, that is capable of creating new knowledge.19 The set of behaviors is too limited, individual behaviors are too narrowly defined, and the digital agent too inept at acquiring new behaviors without human intervention for the machine to be considered intelligent. The autonomy exhibited by an AI algorithm is limited, as the algorithm will only consider the data that it is configured to consider, and will only reason in the way it’s designed to. Its agency is similarly limited, as it can only see the world in terms of how its creators framed it, can only choose among the behaviors configured into it, and can only affect the world (exercise its agency) in ways that we allow it to. And the solution can only be assumed to be operating correctly if the context it operates in is exactly the same as the one its behaviors were designed for (such as on a freeway where vehicles don’t stop unexpectedly). Otherwise it will fail.
If work is divided along the lines of knowledge creation versus knowledge application—with humans responsible for the former and digital agents for the latter—the role that (human) workers play in production will change, as will the relationships between a firm and its customers and between the firm and its workers.
Let’s first consider the change in workers’ role in production. In the past, production was divided into a series of specialized tasks, with humans responsible for complex tasks, and simple tasks delegated to machines.20 Human and machine worked in sequence. In the future, if we choose to relate to AI in the manner described above, work will be allocated to humans and machines not along the lines of complex vs. simple, but according to whether the work requires knowledge to be created vs. knowledge to be applied. Humans will make sense of the world and machines will plan and then deliver suitable solutions, with production built around discovering, framing, and solving problems.21 Human and machine work together.
… happy retirement is for a particular individual. At best they’ll have a vague idea—playing golf, traveling, or investing time in a neglected hobby are all common scenarios. First, the individual needs to determine that what they think will make them happy will actually make them happy. Next, they need to establish reasonable expectations, based on their anticipated savings and future earnings. Then they need to consider how they might change their attitudes and behaviors today—taking lunch to work rather than buying it—to change their retirement. Finally, once they know what will actually make them happy, have reasonable expectations, and adopt suitable attitudes and behaviors, they will know their income streams, timelines, and appetite for risk. At this point it’s easiest to just press the ‘robo-invest’ button.
In this example we can see a new division of labor. The human works with the client to discover what the client’s “happy retirement” might be—to frame and define the problem, as it were. They’re responsible for managing the interaction between what’s desired (the client’s dreams) and what’s possible (financial constraints and opportunities) to discover what’s practical (the “happy retirement”), engaging in …
… last essay)23 won’t spend their time driving buses (after all, the buses can drive themselves), but will spend time managing disruptions, supporting riders, and optimizing the operation of the bus network. Factory workers might have roles within the factory, but the value they provide is in discovering ways to improve the factory’s operation, something a highly automated factory cannot do on its own.24 Sales staff will work with clients to understand how the store can help them with whatever problem brought the client across the store’s threshold (whether that threshold is digital or physical) and build their relationship with the store, rather than operate the till and check prices. And so on …
In this envisioned division of labor between human and intelligent machine, the bulk of the value is created in the discovery and framing of the problem, as planning and execution are automated and commoditized. But just because society could divide work between humans and machines in this way doesn’t mean that such a state of affairs is inevitable. On the contrary, achieving this new division of labor will require profound change from most businesses, because in today’s society and economy, the work that businesses define as valuable—and pay people to perform—is not necessarily that of discovering and framing the problem.
In the retirement adviser example, the client receives the most value from the Socratic dialogue in which they discover what their happy retirement might be, and which helps them develop the attitudes and behaviors that will get them there. From the client’s point of view, the mechanical process of populating an investment portfolio has relatively little value, as it represents commoditized knowledge and skills. Investment firms, however, frame their relationship with the client in terms of a process that takes client details (investment goal, appetite for risk, income streams) and creates and maintains an investment portfolio to suit. The value for the firm is in the tasks that match client funds to investment products—activities that the client places very little value in. The firm’s desire to measure the number of clients served by an adviser, and the funds invested in each instance, is also disconnected from when the client derives the most value from the relationship.
Ideally, from the client’s point of view, the firm would approach the client at the start of their career, when they are forced to first consider saving for their retirement. While a young person’s understanding of what a “happy retirement” might look like may be vague, it’s important for them to develop the attitudes and behaviors that will set them on the right path, as the magic of compound interest and share market returns needs time to work. However, an investment firm can expect to derive little revenue from a client at this time in their lives. The metrics it uses to measure financial advisers—investments made, funds under management—instead drive the advisers to approach individuals near the peak of their earning capacity (in their late 40s and early 50s, once the kids are out of home), as these individuals have the most money to invest. Unfortunately, this is often too late, as their investment plans have already been made, or they’ve established a lifestyle which makes it difficult to free up funds to invest. Nor can they invest for the long term.
For the firm to derive as much value as possible from the relationship, it needs to establish the relationship early, at the start of the client’s career, investing the time and the effort to instill the attitudes and behaviors that will carry the client into retirement. The relationship also needs to be maintained as the client’s understanding of what their happy retirement might be evolves, or their life circumstances change. This is where value for the client is being created—and where early investment in the relationship will eventually pay off for the firm—yet investment firms typically do not reward advisers well for this kind of activity. This disconnect between where and how value is created for clients, and where value is realized by the firm, might be a contributing factor to the breakdown in trust between financial institutions and their clients.
It’s our view that humans and intelligent machines work together optimally when humans create knowledge, machines apply knowledge, and then humans explain the decisions that have been made. This approach can also create more value for customers, changing the relationship between firm and customer. However, if we don’t value and reward humans for doing the kind of work that comes with an optimal human–AI relationship—if our social and economic systems persist in framing work in terms of tasks completed, and in valuing labor in terms of its ability to prosecute these tasks—then we can expect AI solutions to continue to be used as they often are today: as cost-cutting enablers, substitutes for humans instead of partners with humans. If we don’t ensure that our digital agents’ behaviors exhibit our own core values and combine effectively with those of our employees, or if we unnecessarily constrain their behaviors so that they cannot adapt to the evolving context the firm finds itself in, then we won’t be able to realize their potential or that of our employees. And as long as this remains the case, we can expect the potential value in this new generation of solutions to go unrealized.
Conclusion
At the dawn of the industrial revolution, Benjamin Franklin observed that “man is a tool-making animal.”25 Homo faber, the tool-making human, seemed to be supplanting homo sapiens, the thinking human. Some 200 years later, we are increasingly asking ourselves if we have successfully managed our tools, or if they are managing us. Max Frisch, writing in the aftermath of the Second World War, felt that the latter was true, and that we were being unmade by our own technological ability.26 This oversimplifies our relationship with technology, though—a relationship that dates back at least to the invention of cooking.27 We have evolved with our technology and our technology has evolved with us, as has the relationship between us and our tools. Each shapes the other.
Before our current instrumental relationship, with us playing the role of tool maker and tool user, we had a more craft-based relationship that didn’t make such a strong distinction between the making and using of tools. Today, the development of digital systems—systems that embody automated reasoning, with a (limited) degree of agency—is forcing our relationship with technology to evolve again.
It’s often written that humans will be replaced by ever more capable machines, or that machines will augment humans. But both these points of view are limited, as they’re rooted in the idea that we are either tool user or tool maker. In the future we can expect to have a more active relationship with technology. Our tools are no longer simply our tools; they are taking on a life of their own. A digital agent could easily be your boss, coworker, and subordinate all at the same time, or it might take the lead with you supporting it; you might lead while the digital system supports you, or the work might be a collaborative effort.
We will have a much richer and more nuanced relationship with digital systems, our so-called “digital agents,” than we did with the tools of the industrial era. Digital agents, like all technology, are more precise and relentless than humans, making them in many circumstances more reliable. They are limited, however, as they cannot make sense of the world like a human can, and are unable to step outside the algorithmic box they live in. Digital agents can learn to be more precise and efficient over time, but they can’t learn how to do something differently. Humans, on the other hand, notice the new or unusual and engage in the collaborative processes that result in new insights and knowledge. Our new relationship with technology will be founded on this difference.
However, successfully adopting the next generation of digital tools—autonomous tools to which we delegate decisions and that have a limited form of agency—requires us to acknowledge this new relationship. At the individual level, forming a productive relationship with these new digital tools requires us to adopt new habits, attitudes, and behaviors that enable us to make the most of these tools. At the enterprise level, the firm must also acknowledge this shift, and adopt new definitions of value that allow it to reward workers for contributing to the uniquely human ability to create new knowledge. Only if firms recognize this shift in how value is created, and are willing to value employees for their ability to make sense of the world, will AI adoption deliver the value it promises.
Endnotes
1. Dictaphone was a brand of dictation machine that can trace its heritage back to the Volta Laboratory established
by Alexander Graham Bell in Washington, D.C. in 1881. The name was trademarked in 1907, and spun out
as a separate company in 1923. Since then “Dictaphone” has become a generic term for referring to any
dictation machine.
2. Sometimes this resistance to adopting new work habits can leave one far back on the technology adoption curve,
such as George R.R. Martin who uses a 1970s word processor (WordStar 4.0) to write Game of Thrones. See:
Neha Prakash, “George R.R. Martin writes ‘Game of Thrones’ on a ’70s word processor,” Mashable, May 15, 2014.
3. When we think of shipping containers we typically picture the 20- or 40-foot long ISO 6346 intermodal container
that became a common sight with the development of the world-wide intermodal container network in the
1950s and 1960s. The intermodal shipping container, as a technology, however, dates back at least to the early
1800s, and was first standardized in 1933. It wasn’t until Malcom McLean, a trucking magnate, developed
the current containerization system, requiring the development of new docks and loading systems, and changes
in labor relations, that the intermodal container became a common sight. See: Marc Levinson, The Box: How the
Shipping Container Made the World Smaller and the World Economy Bigger (Princeton University Press, 2016).
4. The iconic smartphone, the exemplar of all modern smartphones, is the iPhone 3G, released in 2008. The
smartphone category, though, dates back to the early 1990s, with smartphones prior to the iPhone 3G only seeing
limited adoption. With widespread adoption came the need to develop social norms governing smartphone use.
Initially, for example, it was common for owners to use their smartphones whenever they wanted, though over
time social norms emerged limiting their use, such as the norm of setting one’s smartphone aside during a
dinner conversation.
5. Peter Evans-Greenwood, Alan Marshall, and Matthew Ambrose, Reconstructing jobs: Creating good jobs in the age
of artificial intelligence, Deloitte Insights, July 8, 2018.
6. Garry Kasparov, “The chess master and the computer,” New York Review of Books, February 11, 2010.
7. Peter Evans-Greenwood, Harvey Lewis, and Jim Guszcza, “Reconstructing work: Automation, artificial intelligence,
and the essential role of humans,” Deloitte Review 21, July 31, 2017.
8. Machines that seem to have a mind of their own are quite an old phenomenon. Alan Turing developed in 1950
what came to be known as the Turing Test, a test to see whether or not a machine could exhibit intelligent
behavior equivalent to, or indistinguishable from, that of a human. The test was defeated in the 1960s by ELIZA
which used a collection of rules triggered off by keywords to generate seemingly intelligent responses.
9. It’s not that our old craft-based relationship with technology was supplanted by these new industrial relationships,
of course. Both exist simultaneously—but the industrial relationship has come to dominate.
10. One of the central points of our previous essay, “Reconstructing work,” was that work enabled by AI should be
constructed around problems and behaviors, rather than products and tasks. See: Evans-Greenwood, Lewis, and
Guszcza, “Reconstructing work.”
11. Even Alpha Zero is an example of this, though it’s often claimed that the solution taught itself to play chess. In
reality, the rules of chess were encoded into the features that feed the neural networks and Monte Carlo tree
search at the heart of Alpha Zero, and then two instances were played off against each other to explore these
rules and find optimal moves.
12. Moley Robotics has developed a robotic kitchen featuring a fully functional robot integrated into a professional
kitchen, which can be taught by example.
13. Paul Christiano et al., “Deep reinforcement learning from human preferences,” Cornell University, January 12,
2017.
14. We might call this the Where’s Wally problem. While humans might be better at classifying potential tumors (as
machines have a higher false-positive rate than humans), we are not as good at finding potential tumors in an
image as machines. We believe that we see the entire image when, really, we only see a small area where our
attention is focused as it meanders over the image, with our mind filling in the rest of the details to create the
illusion of seeing the entire image. This is why it’s difficult to find Wally in an image filled with similar likenesses,
as we only see Wally if we happen to look directly at him. AI, on the other hand, methodically scans the entire
image. It might not be as accurate at identifying something as complex as a tumor, but it has a much better
chance of finding it.
15. Tom Simonite, “Robot Writing Moves from Journalism to Wall Street,” MIT Technology Review, accessed February
19, 2018.
16. The main distinction between a taxi dispatch service and a ridesharing service is that a dispatch service will
announce jobs while a ridesharing service assigns jobs, and if you refuse too many jobs you’re banned from the
service. With a dispatch service, your assignments are your own concern, whereas with ridesharing the computer
effectively controls them. Ridesharing services also use ratings to measure performance, where the computer
collects the ratings and computes the result by using a somewhat opaque algorithm.
17. Alex Roy, “The half-life of danger: The truth behind the Tesla Model X crash,” The Drive, April 16, 2018.
18. It’s worth noting that the Markov Decision Process (MDP) was invented in the late 1950s and was initially used in process engineering; it is otherwise unrelated to AI. The AI community first became interested in the MDP in the early 2000s.
19. An artificial general intelligence (AGI) is a machine with sufficient intelligence to successfully perform any
intellectual task that a human being can.
20. We note that the difference between complex and simple tasks is a somewhat arbitrary definition, as what was
complex in the past might well be manageable today and simple in the near future.
23. Ibid.
24. Craig Trudell, Yuki Hagiwara, and Ma Jie, “‘Gods’ edging out robots at Toyota facility,” Japan Times, April 7, 2014.
25. As quoted in James Boswell, Life of Samuel Johnson (1791) (entry for April 7, 1778).
26. Max Frisch, Homo Faber: A report (San Diego: Harcourt Brace Jovanovich, 1994).
27. Claude Lévi-Strauss argued that our definition of ourselves as not being animals is consistently related to the
techniques and technologies of food preparation. See Claude Lévi-Strauss, Le cru et le cuit (Paris: Plon, 1964).
PETER EVANS-GREENWOOD is currently a fellow at The Deloitte Australia Center for the Edge—helping
organizations embrace the digital revolution through understanding and applying what is happening
on the edge of business and society. Evans-Greenwood has spent 20 years working at the intersection
between business and technology. These days he works as a consultant and strategic adviser on both
business and technology sides of the fence. He is based in Melbourne.
ROBERT HILLARD is Deloitte’s chief strategy & innovation officer and is responsible for positioning
the firm to tackle the disruption of technology, new competitors, challenging economic conditions,
and changing regulatory priorities. Throughout his career, he has retained his focus on technology
issues, and is passionate about shaping the firm’s response to the huge changes to Australia’s society
and economy as a result of technology, digital disruption, and the shift to an information economy.
He is particularly known for his work in business transformation, digital disruption, and information
management through governance or board roles with professional and industry associations, his books,
media, and client work. Hillard is based in Melbourne.
ALAN MARSHALL is the national lead partner in Deloitte’s Analytics and Cognitive practice in Australia,
working with clients to establish cognitive and analytic platforms that augment human decision-making
and drive automation. Marshall is a thought leader in the application of automation and artificial
intelligence in an organizational context. He is also well known as a specialist in developing strategies
and solutions for clients that utilize data assets to improve yield on decision-making or productivity in
process execution. His experience is wide ranging and includes driving outcomes for clients across the
oil & gas, mining, utilities, and higher education sectors. He is based in Perth.
Contacts
Alan Marshall
National lead, Analytics and Cognitive
Partner, Deloitte Consulting Pty Ltd
+61 8 9365 8139
[email protected]
David Brown
National lead, Human Capital
Partner, Deloitte Consulting Pty Ltd
+61 2 9322 7481
[email protected]