
Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research

Jess Whittlestone, Rune Nyrup, Anna Alexandrova, Kanta Dihal and Stephen Cave

Authors: Jess Whittlestone, Rune Nyrup, Anna Alexandrova, Kanta Dihal, Stephen Cave, Leverhulme Centre for the Future of Intelligence, University of Cambridge.

With research assistance from: Ezinne Nwankwo, José Hernandez-Orallo, Karina Vold, Charlotte Stix.

Citation: Whittlestone, J. Nyrup, R. Alexandrova, A. Dihal, K. Cave, S. (2019) Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. London: Nuffield Foundation.

ISBN: 978-1-9160211-0-5

The Nuffield Foundation has funded this project, but the views expressed are those of the authors and not necessarily those of the Foundation.

Acknowledgements

The authors are grateful to the following people for their input at workshops and valuable feedback on drafts of this report: Haydn Belfield, Jude Browne, Sarah Castell, Jennifer Cobbes, Damian Clifford, Matthew Crosby, Naomi Fern, Danit Gal, Julian Huppert, Stephen John, Martina Kunz, Nóra Ní Loideáin, Jonnie Penn, Huw Price, Diana Robinson, Henry Shevlin, Jeffrey Skopek, Adrian Weller, Alan Winfield.

Reviewers: Alan Wilson, Sophia Adams-Bhatti, Natalie Banner, Josh Cowls, Claire Craig, Tim Gardam, Helen Margetts, Natasha McCarthy, Imogen Parker, Reema Patel, Hetan Shah, Olivia Varley-Winter, Antony Walker.

About the Nuffield Foundation

The Nuffield Foundation funds research, analysis, and student programmes that advance educational opportunity and social well-being across the United Kingdom. The research we fund aims to improve the design and operation of social policy, particularly in Education, Welfare, and Justice. Our student programmes provide opportunities for young people, particularly those from disadvantaged backgrounds, to develop their skills and confidence in quantitative and scientific methods.

We have recently established the Ada Lovelace Institute, an independent research and deliberative body with a mission to ensure data and AI work for people and society. We are also the founder and co-funder of the Nuffield Council on Bioethics, which examines and reports on ethical issues in biology and medicine.

We are a financially and politically independent charitable trust established in 1943 by William Morris, Lord Nuffield, the founder of Morris Motors.

Copyright © Nuffield Foundation 2019

28 Bedford Square, London WC1B 3JS
T: 020 7631 0566
Registered charity 206601

www.nuffieldfoundation.org
www.adalovelaceInstitute.org
@NuffieldFound | @AdaLovelaceInst

Foreword

This report sets out a broad roadmap for work on the ethical and societal implications of algorithms, data and AI (ADA). Their impact on people and society shapes practically every question of public policy, but discussion is not necessarily based on a shared understanding of either the core ethical issues, or an agreed framework that might underpin an ethical approach to the development and deployment of ADA-based technologies. Even where there is a broad consensus on core issues, such as bias, transparency, ownership and consent, they can be subject to different meanings in different contexts – interpretation in technical applications differs to that in the judicial system, for example. Similarly, ethical values such as fairness can be subject to different definitions across different languages, cultures and political systems.

Clarifying these concepts, and resolving the tensions and trade-offs between the central principles and values in play, is crucial if we want ADA-based technologies to be developed and used for the benefit of society. The roadmap identifies, for the first time, the directions for research that need to be prioritised in order to build a knowledge base and shared discourse that can underpin an ethical approach. For each of the key tasks identified, the authors provide detailed questions that, if addressed, have the collective potential to inform and improve the standards, regulations and systems of oversight of ADA-based technologies.

The Nuffield Foundation has recently established – in partnership with others – the Ada Lovelace Institute (Ada), an independent research and deliberative body with a mission to ensure data and AI work for people and society. In commissioning this roadmap, our intention was to inform both Ada's work programme, and to help shape the research agenda on the increasingly important question of how society should equitably distribute the transformative power and benefits of data and AI while mitigating harm.

The message emerging from the roadmap is that the study of the questions it sets out must be plural, interdisciplinary, and connect different interests across academic research, public policy, the private sector and civil society. This is very much at the heart of the Ada Lovelace Institute's mission. One of Ada's core aims is to convene diverse voices to create a shared understanding of the ethical issues arising from data and AI, and an interdisciplinary and collaborative approach will be central to its operation.

As an independent funder with a mission to advance social well-being, the Nuffield Foundation is keen to fund more research in this area. The question of how digital technologies, and their distributional effects, can alleviate, exacerbate and shift vulnerability and affect concepts of trust, evidence, and authority is one of the themes prioritised in our strategy. We hope that this roadmap will help to generate relevant research proposals.

I thank the authors of this report for delivering an intellectually stimulating and, at the same time, practical contribution to this important field.

Tim Gardam
Chief Executive

Executive Summary

The aim of this report is to offer a broad roadmap for work on the ethical and societal implications of algorithms, data, and AI (ADA) in the coming years. It is aimed at those involved in planning, funding, and pursuing research and policy work related to these technologies. We use the term 'ADA-based technologies' to capture a broad range of ethically and societally relevant technologies based on algorithms, data, and AI, recognising that these three concepts are not totally separable from one another and will often overlap.

A shared set of key concepts and concerns is emerging, with widespread agreement on some of the core issues (such as bias) and values (such as fairness) that an ethics of algorithms, data, and AI should focus on. Over the last two years, these have begun to be codified in various codes and sets of 'principles'. Agreeing on these issues, values and high-level principles is an important step for ensuring that ADA-based technologies are developed and used for the benefit of society.

However, we see three main gaps in this existing work: (i) a lack of clarity or consensus around the meaning of central ethical concepts and how they apply in specific situations; (ii) insufficient attention given to tensions between ideals and values; (iii) insufficient evidence on both (a) key technological capabilities and impacts, and (b) the perspectives of different publics.

In order to address these problems, we recommend that future research should prioritise the following broad directions (more detailed recommendations can be found in section 6 of the report):

1. Uncovering and resolving the ambiguity inherent in commonly used terms (such as privacy, bias, and explainability), by:

a. Analysing their different interpretations.

b. Identifying how they are used in practice in different disciplines, sectors, publics, and cultures.

c. Building consensus around their use, in ways that are culturally and ethically sensitive.

d. Explicitly recognising key differences where consensus cannot easily be reached, and developing terminology to prevent people from different disciplines, sectors, publics, and cultures talking past one another.

2. Identifying and resolving tensions between the ways technology may both threaten and support different values, by:

a. Exploring concrete instances of the following tensions central to current applications of ADA:

i. Using algorithms to make decisions and predictions more accurate versus ensuring fair and equal treatment.

ii. Reaping the benefits of increased personalisation in the digital sphere versus enhancing solidarity and citizenship.

iii. Using data to improve the quality and efficiency of services versus respecting the privacy and informational autonomy of individuals.

iv. Using automation to make people's lives more convenient versus promoting self-actualisation and dignity.

b. Identifying further tensions by considering where:

i. The costs and benefits of ADA-based technologies may be unequally distributed across groups, demarcated by gender, class, (dis)ability, or ethnicity.

ii. Short-term benefits of technology may come at the cost of longer-term values.

iii. ADA-based technologies may benefit individuals or groups but create problems at a collective level.

c. Investigating different ways to resolve different kinds of tensions, distinguishing in particular between those tensions that reflect a fundamental conflict between values and those that are either illusory or permit practical solutions.

3. Building a more rigorous evidence base for discussion of ethical and societal issues, by:

a. Drawing on a deeper understanding of what is technologically possible, in order to assess the risks and opportunities of ADA for society, and to think more clearly about trade-offs between values.

b. Establishing a stronger evidence base on the current use and impacts of ADA-based technologies in different sectors and on different groups – particularly those that might be disadvantaged, or underrepresented in relevant sectors (such as women and people of colour) or vulnerable (such as children or older people) – and to think more concretely about where and how tensions between values are most likely to arise and how they can be resolved.

c. Building on existing public engagement work to understand the perspectives of different publics, especially those of marginalised groups, on important issues, in order to build consensus where possible.
Contents

1. Introduction
2. The current landscape
3. Concept building
4. Exploring and addressing tensions
5. Developing an evidence base
6. Conclusion: A roadmap for research

Bibliography
Appendix 1: Summary of literature reviews
Appendix 2: Groupings and principles
Appendix 3: Different perspectives

1. Introduction

1.1 Aims, approach and outline

The aim of this report is to offer a roadmap for work on the ethical and societal implications of algorithms, data, and AI (ADA) in the coming years. We review what progress has been made in understanding these issues across academia, policy, and industry, identify gaps in the current research landscape, and assess the strengths and limitations of existing work. On this basis, we recommend three broad areas of research, and highlight specific priority questions within each of the three areas. These recommendations, and the report in general, are aimed at individuals and organisations involved in planning, funding, and pursuing research and policy work related to the emerging ethical and societal challenges raised by algorithms, data and AI. Our focus is on the short- to medium-term issues that have already emerged or are in the process of emerging at this time; we do not focus on solutions that involve radical political or technological transformation.1 We also focus primarily on priorities for research rather than for the role of policy or regulation. However, we urge that these options are also researched and point out in this report how this might be done.

To arrive at these recommendations, we began by conducting a wide-ranging literature review of relevant work in the English language: covering over 100 academic papers, both theoretical and empirical, from disciplines including (but not limited to) computer science, ethics, human-computer interaction, law, and philosophy. We also reviewed key policy documents from across several continents, and some of the most commonly cited popular news and media articles from the last few years.2 We held three workshops (each bringing together at least twenty different experts from a range of relevant fields), and a series of smaller discussion and brainstorming sessions in groups of between five and 10 people.

This report is organised as follows:

• Section 2 provides a high-level summary of the current landscape, based on a more detailed literature review, which can be found in appendix 1. We highlight some of the successes of research so far, and some of the gaps that still exist. We conclude that the road forward is in particular need of three broad types of work: concept building, identifying and resolving tensions and trade-offs, and building a stronger evidence base around these tensions.

• Sections 3–5 focus on each of these recommended areas of work in turn: explaining in more detail why this is a priority, what research in this area should consider in general, and what specific questions or areas seem particularly important. Section 6 draws the conclusions of the preceding sections together to present a 'roadmap': a set of high-level recommended research directions.

1.2 Definitions and key terms

The scope of this investigation was broad: to consider both the ethical and societal implications of algorithms, data, and artificial intelligence.

Ethical and societal implications
We adopt a broad definition of 'ethical and societal implications', to consider the ways that algorithms, data and AI may impact various parts of society, and how these impacts may either enhance or threaten widely held values. We deliberately use the term 'implications' to capture that we are not just interested in the negative impacts of these technologies (as alternative terms like 'issues', 'risks', or 'challenges' might suggest), but also the

1 Other groups have focused on prioritising research on related, longer-term challenges of advanced AI systems, most notably the Future of Humanity Institute
at Oxford University, which recently published a research agenda for long-term AI governance. It would be valuable for future work to look at the interrelations
between short- and longer-term research priorities, and how the two might better learn from one another.

2 We chose media articles that we either found to be cited repeatedly in the academic literature we reviewed, or those that were written in the last year in
high-profile outlets such as the New York Times, the Guardian, or TechCrunch. A more systematic and comprehensive review of media outlets was beyond
the scope of this initial review, but a broader analysis in follow-up work could strengthen our assessment of the space.

positive impacts they might produce. This is crucial, as we will later emphasise the importance of considering the tensions that arise between the opportunities and risks of technologies based on algorithms, data, and AI.

Values
When we speak about recognising conflicts between different values enhanced or endangered by new technologies, we use the term 'values' to pick out commitments that are deeply held and reasonably widely shared. Values are not mere desires, revealed preferences3 or pleasures. They are goals and ideals that people endorse thoughtfully and defend to each other as appropriate, and which motivate ways in which they organise communal life.4 Here we concentrate on those values that have been invoked especially frequently in the recent anglophone debates about emerging AI and data-based technologies, but also try to identify those that resonate more widely across cultures.

Algorithms
In mathematics and computer science, the term 'algorithm' means an unambiguous procedure for solving a given class of problems. In this report, we primarily use 'algorithm' to mean something closer to 'automated algorithm': a procedure used to automate reasoning or decision-making processes, typically carried out by a digital computer. Often we will simply use 'algorithm' as a shorthand to refer to the software that implements this procedure, and terms like 'algorithmic decision-making' more or less as a synonym for computerised decision-making. For our purposes, the key aspect of algorithms is that they can be automated, and can be executed systematically at much higher speeds than humans, also automatically triggering many other procedures as a result.

Data
We define data as 'encoded information about one or more target phenomena' (such as objects, events, processes, or persons, to name a few possibilities). Today, data is usually encoded digitally rather than analogically. Data is ethically and societally relevant for three reasons. First, the process of collecting and organising data itself requires making assumptions about what is significant, worthy of attention, or useful. Since these assumptions are unlikely to hold in all contexts, no dataset is fully complete, accurate, or neutral. Second, digitally encoded data allows information to be duplicated, transferred, and transformed much more efficiently than ever before. Third, new forms of analysis allow those possessing large amounts of data to acquire novel insights.

Artificial Intelligence
Of the three key terms explored in this report, 'artificial intelligence' (AI) is probably the hardest and most controversial to define. The word 'intelligence' is used in many different ways both in ordinary discourse and across a number of different academic fields, often with politically loaded connotations.5 For the purpose of this report, we take 'artificial intelligence' to refer to any technology that performs tasks that might be considered intelligent – while recognising that our beliefs about what counts as 'intelligent' may change over time. For example, we don't intuitively think of visual perception or walking as particularly 'intelligent' tasks, because they are things we do with little conscious effort: but attempting to replicate these abilities in machines has shown they actually rely on incredibly complex processes. We also consider the key feature of AI most relevant to ethics and society: the fact that AI can often be used to optimise processes and may be developed to operate autonomously, creating complex behaviours that go beyond what is explicitly programmed.

Publics
The term 'public' is often taken for granted as a catch-all term for 'every person in society'. We, on the other hand, use the term 'publics' in plural to emphasise that different interest groups (scientists, mediators, decision-makers, activists, etc.) bring their own distinct perspectives.6

3 That is, we do not think it is possible to infer people's values simply by observing how they behave in a marketplace.

4 Tiberius (2018) provides a fuller definition.

5 See Cave (2017).

6 Here we follow Burns et al. (2003), who distinguish different publics as being relevant for different contexts.

This allows us to avoid focusing on the dominant views and attitudes at the expense of those coming from the margins. We do not use the term 'lay public' in opposition to 'experts' in recognition of the fact that many different groups have relevant expertise.

With these definitions in hand, we can clarify why the ethical and societal implications of ADA-based technologies motivate concern. ADA-based technologies are dual-use in nature: the purpose for which they are initially developed can easily be changed and transferred, often radically altering their moral valence. For example, image recognition techniques have clearly positive applications such as in the identification of malignant tumours, but can also be repurposed in ways that could be harmful, such as for mass surveillance (Bloomberg News, 2018). Relatedly, ADA-based technologies involve inherently domain-neutral capacities, such as information processing, knowledge acquisition and decision-making. Thus, the same techniques can be applied to almost any task, making them increasingly pervasive and permeable across different parts of society. The same technology could carry very different risks and benefits in different application areas, for different publics, touching on different values.

These features, together with the remarkable speed with which powerful private companies have pioneered new applications of ADA-based technologies in the recent decade, explain the increase in focus on the need to regulate and guide the ethics of ADA in the right direction.

2. The current landscape

Discussion of the ethical and societal implications of algorithms, data, and AI does not fit neatly into a single academic discipline, or a single sector. To understand the full range of recent coverage of these issues, we therefore need to look very broadly: at academic publications from philosophy and political science to machine learning, and beyond academia to policy, industry, and media reports. Our review focused on understanding two main things: (1) what specific issues and concerns are given attention across different types of publication, and (2) what attempts have been made to synthesise issues across disciplines and sectors.

We drew two main conclusions from this review. First, a shared set of key concepts and concerns is emerging, but the terms used are often ambiguous or used unreflectively. Second, several different attempts to synthesise issues into frameworks and sets of principles exist,7 but are often unsystematic or too high-level to guide practical action.8

[Figure 1. Word cloud of emerging shared concepts, based on the frequency of words used in the groupings and frameworks of several key reports and organisations. See appendix 2 for details.]

2.1 Identifying key concepts and concerns

While the existing literature covers a wide range of issues, a shared set of key concepts and concerns is nonetheless emerging. Concerns about algorithmic bias and ensuring that machine learning that supports decisions about individuals is used fairly, for example, have become a centrepiece of these discussions, as has an emphasis on the importance of making 'black box' systems transparent and explainable. Issues of personal data privacy also arise repeatedly, as do questions of how we maintain accountability and responsibility as more and more decisions become automated. The impact of ADA-based technologies on the economy and implications for the future of work are further themes that arise frequently. See figure 1 for an illustration of the most common terms used in recent attempts to list the key issues arising from ADA. This word cloud is based on the frequency of terms as they arise in the various frameworks and categories we reviewed, with larger words occurring more frequently.

However, as terms rise in popularity, they may be used unreflectively or ambiguously. For example, commentators frequently champion the importance of 'transparency' without clarifying exactly what they mean by it or why it is important. There is also inconsistency in the meanings attached to these terms in different contexts: for example, 'bias' might mean something quite precise in a technical paper, but something more vague in a policy report. We discuss how different uses and interpretations of the same terms may cause problems in section 3.

Although consensus on key issues is emerging, disciplines of course differ in their areas of emphasis. Unsurprisingly, computer science and machine learning research focuses mostly on those ethical issues that can most easily be framed in technical terms: including how to make machine learning systems more interpretable and reliable, and issues of privacy and data protection. Philosophy and ethics papers often focus on questions about the moral significance of more advanced AI systems that could exist in the future, with less attention paid to the ethical challenges of current

7 See for example Cowls and Floridi (2018) for a framework-focused approach, and the House of Lords Select Committee on AI’s (2018) report for a principles-
focused approach.

8 This section is restricted to providing a high-level assessment of the current landscape of ADA ethics and societal impacts. For more detailed descriptions and
assessments, see appendices 1–4.

technologies – though a body of literature on these more near-term issues is emerging in fields such as information and technology ethics. Academic law literature does much more than other areas we reviewed to try to pull apart different interpretations of different terms such as 'privacy' and 'fairness', and to discuss the implications of these different meanings.

When we look beyond research papers tackling specific issues, towards high-level attempts to synthesise a range of issues, we find many take similar approaches to grouping or categorising these issues.9 For example, many similarities can be seen between the categories that DeepMind Ethics and Society (DMES) and the Partnership on AI (PAI) use to define their research areas:

DMES Research Themes10 | PAI Thematic Pillars11
Privacy, transparency and fairness | Fair, transparent and accountable AI
Economic impact, inclusion, and equality | AI, labor, and the economy
Governance and accountability | Social and societal influences of AI
AI morality and values | AI and social good
Managing AI risk, misuse, and unintended consequences | Safety-critical AI
AI and the world's complex challenges | Collaborations between people and AI systems

Though these groupings hint at some underlying structure, as they stand they are relatively unsystematic. This is illustrated by the subtle differences in how different groups place the boundaries of their categories. Does 'accountability' belong in the same category as 'fairness and transparency' (as the Partnership on AI have it), or should it fall into a separate category with issues of governance and regulation (as DeepMind have it)? Should 'trust' be categorised with either 'transparency', or 'fairness', or 'privacy' – or should all these issues be lumped together? Should 'AI for social good' be a category of its own or does it cut across all the other categories? What issues might not fit neatly into any of these categories at all (such as AI Now's notion of 'rights and liberties'12)?

Without an understanding of why these issues and categories have been chosen and not others, it is difficult to be confident that all the relevant issues have been captured. It is not clear whose values and priorities are being promoted, and whether the concerns of all members of society – including minority groups – are being represented. Some groups and papers are beginning to take an approach that starts with a more fundamental map of the ethical landscape: for example, a 2018 report from the EDPS Ethics Advisory Group, 'Towards a Digital Ethics', systematically considers each of the 'European' values, and how they might be threatened by the features of an increasingly digital world. This highlights some questions that have not been given so much attention, such as how individualised profiling might threaten solidarity in society, or how the availability of data might worsen power imbalances between governments and companies on the one hand, and individuals on the other.

Efforts like these aim to produce a single theoretical framework that can be presented as a single list of principles and values. While unity is valuable for some endeavours (e.g. for coordination and public accountability), it can also restrict attention: highlighting some issues at the expense of masking others. For now, it is clear that there remain many possible ways to carve up this space, each of which will have different advantages and disadvantages, and prioritise some values above others.13

9 See appendix 2 for recently proposed lists of key issues.

10 https://deepmind.com/applied/deepmind-ethics-society/research/

11 www.partnershiponai.org/about/#our-work

12 https://ainowinstitute.org/research.html

13 For a more detailed assessment of the strengths and weaknesses of different approaches to organising the issues, see appendix 2. Appendix 3 contains several perspectives which can be used to restrict discussions to a more limited set of issues.

2.2 Formulating ethical principles

In addition to these explorations of key concepts, various groups have also begun to publish sets of prescriptive principles or codes to guide the development and use of ADA-based technologies. These principles often overlap with and include concepts mentioned in the previous section, but focus less on articulating what the 'issues' are, and instead on articulating some goals for the use and development of technology.

For example, the Asilomar AI principles, developed in 2017 in conjunction with the Asilomar conference for Beneficial AI,14 outline guidelines on how research should be conducted, ethics and values that AI must respect, and important considerations for thinking about longer-term issues. The principles were signed by several thousand AI researchers and others, including many academic ethicists and social scientists. The Partnership on AI has also established a set of 'tenets' to guide the development and use of AI technologies, which all members – including many of the most prominent technology companies – endeavour to uphold.15

In addition, governments and international bodies are developing their own principles: a recent report from the Lords Select Committee on Artificial Intelligence, 'AI in the UK: ready, willing, and able?'16 suggests five principles for a cross-sector AI code which could be adopted internationally. The IEEE Standards Association has also launched a 'Global Initiative on Ethics of Autonomous and Intelligent Systems'17, and has developed a set of general principles to guide ethical governance of these technologies. Industry is also getting involved: most prominently with Google publishing its 'AI ethics principles' in June 2018.18 Figure 2 illustrates the key terms that arise across all these sets of principles we reviewed, again where word size corresponds to frequency.

[Figure 2. Word cloud of concepts frequently occurring in principles and codes, based on the frequency of words used in the principles outlined in appendix 2.]

There is substantial overlap between these different sets of principles. For example, there is widespread agreement that ADA-based technologies should be used for the common good, should not be used to harm people or undermine their rights, and should respect some of the widely-held values mentioned above such as fairness, privacy, and autonomy. There have also been attempts to synthesise them into a short list of key principles (e.g. beneficence, non-maleficence, autonomy, justice, and explicability)19 modelled on a prominent tradition within biomedical ethics.

Principles are a valuable part of any applied ethics: they help to condense complex ethical issues into a few central elements which can allow widespread commitment to a shared set of values. They can also provide an informal means of holding people and organisations accountable, to reassure public concerns. For example, the machine learning community has mobilised over the issue of autonomous weapons in the past year, with many groups and individual researchers making public commitments not to be involved in their development. This is a case where joint commitment to a specific and action-guiding principle can have a real impact on the ethical implications of technology.

However, most of the principles proposed for AI ethics are not specific enough to be action-guiding. While these
14 https://futureoflife.org/ai-principles/. Some of the authors of this report were present at that conference and involved in the development of the principles.

15 www.partnershiponai.org/tenets/. The Leverhulme Centre for the Future of Intelligence, at which the authors of this report are based, is a member of the
Partnership on AI.

16 Some of the authors of this report gave evidence to the Committee.

17 https://standards.ieee.org/develop/indconn/ec/autonomous_systems.html. Some of the authors of this report have been involved with this initiative.

18 https://ai.google/principles/

19 Cowls and Floridi (2018).



principles do reflect agreement about which aims are important and desirable as the development and use of ADA-based technologies advances, they do not provide practical guidance to think through new and challenging situations. The real challenge is recognising and navigating the tensions between principles that will arise in practice. For example, a truly beneficial application of AI that could save lives might involve using personal data in a way that threatens commonly held notions of privacy, or might require us to use algorithms that we cannot entirely explain. Discussion of principles must begin to acknowledge these tensions, and provide guidelines for how to navigate the trade-offs they introduce.20

2.3 Underlying assumptions and knowledge gaps

It is also worth noting several assumptions implicit in current discussions, as they reveal gaps in the existing knowledge about what is, and what will be, technically possible, and about the values of different groups in society.

For example, concerns about algorithmic bias often presuppose that the human decision-makers being replaced by algorithms are not equally or even more biased. While this comparison is sometimes raised, it is rarely investigated systematically when we should expect algorithmic systems to do better than humans, and when they merely perpetuate or reinforce human biases. Emphasis on algorithmic transparency assumes that some kind of 'explainability' is important to all kinds of people, but there has been very little attempt to build up evidence on which kinds of explanations are desirable to which people in which contexts. Discussions of the future of work are often underpinned by assumptions about the benefits and harms of different forms of automation, but lack substantive evidence on either the objective benefits and harms of automation so far, or public opinion on these topics.

Putting principles into practice and resolving tensions will require us to identify these kinds of assumptions and fill knowledge gaps around technological capabilities, the impacts of technology on society, and public opinion. Without understanding current applications of ADA-based technologies and their impacts on society, we cannot clearly identify the issues and tensions which are most pressing. Without understanding what is technologically feasible, it is difficult to have a meaningful discussion about what trade-offs exist and how they might be navigated. And without understanding the perspectives of various different groups in society, we risk making trade-offs that favour the values and needs of the majority at the expense of minorities. It is not enough to agree that we must preserve human autonomy, for example: we need to develop a rigorous understanding of the specific ways that technology might undermine autonomy now and in future, and in what contexts different people might be willing to sacrifice some amount of autonomy for other goods.

2.4 Summary and recommendations

To summarise:

• A useful set of shared concepts is emerging, but is currently based on ambiguous terms often used unreflectively. There are important ambiguities in many of the terms often used, which may mask significant differences in how concepts are understood by different disciplines, sectors, publics and cultures.

• Important codes and principles are being established, but there is little recognition of the tensions that will inevitably be encountered in putting these principles into practice: when values come into conflict with one another, when there are conflicts between the needs of different groups, or when there are resource limitations.

• Current discussions of issues and principles often rely on implicit assumptions about what is technically possible, how technology is impacting society, and what values society should prioritise. To put principles into practice and resolve these tensions, it is crucial to identify and challenge these assumptions, building a stronger and more objective evidence base for understanding underlying technological capabilities, societal impacts and societal needs.

Substantial progress has been made over the last few years on understanding the ethical and societal implications of ADA, the challenges and questions these raise, and how we might address them. The road ahead needs to focus on:

• Building a shared understanding of key concepts that acknowledges and resolves ambiguities, and bridges disciplines, sectors, publics and cultures. In section 3, we begin to unpack some of the terminological overlaps, different uses and

20 We make a full case for this in Whittlestone et al (2019).



interpretations, and conceptual complexities which contribute to confusion and disagreement.

• Identifying and exploring the tensions that arise when we try to put agreed-upon principles into practice. In section 4 of this report, we begin to do exactly this: identifying and unpacking in detail several tensions that are illustrative of the conflicts emerging in this space more broadly, and outlining some guidelines for resolving these tensions.

• Deepening understanding of technological capabilities, societal impacts, and the perspectives of different groups, in order to better understand the issues that arise and how to resolve them. In section 5, we explain why understanding and challenging assumptions about technology and society is crucial for resolving tensions, and highlight some priority areas for research.

In each of the following sections, we also highlight research priorities and recommendations for future work.

3. Concept building

An important obstacle to progress on the ethical and societal issues raised by ADA is the ambiguity of many central concepts currently used to identify salient issues. As reviewed in section 2, concepts like 'fairness', 'transparency' and 'privacy' figure prominently in the existing literature. While they have served to highlight common themes emerging from case studies, many of these terms are overlapping and ambiguous. This stems partly from the fact that different fields, disciplines, sectors, and cultures can use these concepts in substantially different ways, and partly from inherent complexities in the concepts themselves. As a result, discussions of the ethical and societal impacts of ADA risk being hampered by different people talking past each other.

Making constructive progress in this space requires conceptual clarity, to bring into sharper focus the values and interests at stake. In this section we outline in detail the different challenges that need to be overcome in order to achieve this conceptual clarity.

3.1 Terminological overlaps

One challenge is that different terms are often used to express overlapping (though not necessarily identical) phenomena.

For example, the terms 'transparency', 'explainability', 'interpretability', and 'intelligibility' are often used interchangeably to refer to what 'black-box' algorithms are thought to be missing. Commentators have pointed out that these terms can refer to a number of distinct problems.21 Is the problem that companies or state agencies refuse to share their algorithms? Or that the models themselves are too complex for humans to parse? And are we talking about any human or merely people with the relevant scientific knowledge or expertise? While all of these questions may in a loose sense be said to involve problems of transparency, they raise different kinds of challenges and call for different kinds of remedies.

Similarly, the terms 'bias', 'fairness', and 'discrimination' are often used to refer to problems involving datasets or algorithms which (in some sense) disadvantage certain individuals or groups. Again, it is not clear that all cases referred to by these terms involve the same type of problem.22

Some research has begun to untangle these overlaps.23 For example, Barocas (2014) distinguishes three kinds of concerns for algorithms based on data-mining, which have been raised under the heading 'discrimination':

1. Cases where deployers of an algorithm deliberately attempt to disadvantage certain users and make this difficult to detect (e.g. by hiding the critical bit of code within a complicated algorithm).

2. Cases where data-mining techniques produce errors which disadvantage certain users (e.g. due to unreliable input data or users drawing faulty inferences from the algorithms' output).

3. Cases where an algorithm enhances decision-makers' ability to distinguish and make differential decisions between people (e.g. allowing them to more accurately identify and target financially vulnerable individuals for further exploitation).

Disentangling different issues lumped together under a single term in this way is an important first step towards conceptual clarification, as different types of issues arguably require different types of remedy.

3.2 Differences between disciplines

A further challenge stems from the fact that some of the most widely used terms have different connotations and meanings in different contexts.

For example, in statistics a 'biased sample' means a sample that does not adequately represent the distribution of features in the reference population (e.g. it contains a higher proportion of young men than in the overall population). In law and social psychology,

21 Burrell (2016); Lipton (2016); Weller (2017); Selbst & Barocas (2018).

22 Barocas (2014); Binns (2017).

23 E.g. Barocas (2014); Burrell (2016); Weller (2017); Zarsky (2016); Mittelstadt et al (2016).

by contrast, the term 'bias' often carries the connotation of negative attitudes or prejudices towards a particular group. In this sense, a dataset which is 'unbiased' (in the statistical sense) may nonetheless encode common biases (in the social sense) towards certain individuals or social groups. Distinguishing these different uses of the same term is important to avoid cross-talk.24
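To make the distinction concrete, the short Python sketch below constructs a hypothetical recruitment dataset of our own devising (it is not drawn from the literature reviewed here). The sample is unbiased in the statistical sense – its group composition matches the reference population – yet the historical approval decisions it records are biased in the social sense, favouring one group over the other among equally qualified applicants.

import random

random.seed(0)

# Reference population: 50% group A, 50% group B.
GROUP_SHARE_A = 0.5

def draw_sample(n):
    # The sample's group composition mirrors the population,
    # so it is 'unbiased' in the statistical sense.
    rows = []
    for _ in range(n):
        group = "A" if random.random() < GROUP_SHARE_A else "B"
        qualified = random.random() < 0.6  # same qualification base rate in both groups
        # Historical decisions favoured group A: equally qualified applicants
        # from group B were approved less often, so the recorded labels encode
        # a social bias (prejudice) despite the representative sampling.
        approval_rate = 0.9 if group == "A" else 0.5
        approved = qualified and random.random() < approval_rate
        rows.append({"group": group, "qualified": qualified, "approved": approved})
    return rows

data = draw_sample(10_000)
for g in ("A", "B"):
    qualified_rows = [r for r in data if r["group"] == g and r["qualified"]]
    rate = sum(r["approved"] for r in qualified_rows) / len(qualified_rows)
    print(f"Approval rate for qualified applicants in group {g}: {rate:.2f}")

A model trained to predict 'approved' from records like these would reproduce the prejudice encoded in the labels even though the sample is statistically representative – exactly the kind of cross-talk that conflating the two senses of 'bias' invites.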
Apart from these terminological issues, different disciplines also embody different research cultures that can affect the clarification and refining of ambiguous concepts. For instance, many machine learning researchers would naturally seek to construct a mathematically precise definition of, say, 'fairness',25 whereas qualitative social scientists would often seek to highlight the rich differences in how different stakeholders understand the concept. Similarly, philosophical ethicists often seek to highlight inherent dilemmas and in-principle problems for different definitions of a concept, whereas many lawyers and researchers from other policy-oriented disciplines would look for operational definitions that are good enough to resolve in-practice problems.
can be reduced to whatever is expressed in its dominant
These differences in approach are in part motivated philosophical or religious traditions. As long as they are
by what problems the methodologies available to recognised as tendencies, exploring these differences will
different disciplines are best suited to solve, and the be important for understanding the varied connotations
kinds of research that are valued within different for different groups of the concepts used to discuss ADA-
fields. Furthermore, different strategies for concept based technologies. However, there is also a need for more
building tend to align with different strategies for empirical work (e.g. surveys, interviews, anthropological
resolving ethical and societal problems. For example, studies) on conceptual variations between and within
conceiving such problems as purely technical in different countries and cultures.
nature, where value judgements are used only in the
specification of the problem, as opposed to conceiving These points are not only applicable to different cultures
them as political problems, which require stakeholders as defined for example by a specific national, linguistic or
to negotiate and compromise. religious community. Key ethical and political concepts
may also be more or less visible, and may have different
Attempts to clarify key concepts relating to the ethical connotations in, different intersecting groups or publics
and societal challenges of ADA should take heed of these within and across cultures, such as gender, sexuality, class,
disciplinary differences and not inadvertently prioritise ethnicity, and so on. A useful illustration is the argument
specific research or policy approaches by default. by second-wave feminists encapsulated in the slogan ‘the

24 Barocas & Selbst (2016); London and Danks (2017).

25 For example, Zafar et al. (2017); Kusner et al. (2017); Kearns et al. (2017).

26 For more on differences between Eastern and Western conceptions of privacy, see Ess (2006). See also the IEEE’s Ethically Aligned Design, v.2, pp. 193–216,
which discusses the implications for ADA of several ethical traditions, including both secular traditions (e.g. utilitarianism, virtue ethics, deontology)
and religious/cultural traditions such as Buddhism, Confucianism, African Ubuntu and Japanese Shinto.

personal is political'.27 Second-wave feminists criticised the traditional conception of the private sphere as personal and apolitical in contrast to the political public sphere, a distinction which in Western thought traces back to ancient Greece (Burch 2012, ch. 8). Among other things, this traditional conception has often led nonmarket housework and childcare to be considered irrelevant (or simply ignored) in discussions of labour and economics (for instance, these are not measured in GDP).28 It also resulted in failure to name certain phenomena only visible to the marginalised (such as sexual harassment).

This example illustrates how debates about the future of work in particular, and technology in general, should take into account a broad range of perspectives on what is involved and valuable in concepts such as 'labour', 'leisure' or 'spare time'. More generally, different publics within society will differ in their understanding and valuation of key concepts involved in debates about ADA. Understanding these differences, and ensuring that the values of all members of society are represented, will be key to navigating these debates.

3.4 Conceptual complexity

However, merely distinguishing different uses and interpretations is in itself unlikely to resolve these conceptual tangles. While many morally significant terms can seem intuitively clear and unproblematic, philosophical analyses often reveal deeper conceptual complexities.

Take the concept of fairness again. This is often highlighted as being the key value at stake in cases of algorithmic bias. The terms 'bias' and 'fairness' are often conflated, with some discussions of such cases simply defining bias as unfair discrimination.29 Yet there is no uniform consensus within philosophy on an exact definition of fairness. Political philosophers have defended several different definitions, each drawing on different intuitions associated with the concept.

Some theories focus on achieving a fair distribution of outcomes between groups. Of course, we still need to say what it is that makes a distribution of outcomes fair: different subtheories argue that the fairest distributions are ones that maximise overall benefit (utilitarianism), ones that are as equal as possible (egalitarianism), or ones that benefit the worst-off the most (minimax). Other theories of fairness focus less on any particular distribution of outcomes and instead emphasise how those outcomes are determined: whether the benefits or disadvantages an individual receives are the result of their own free choices, or result from unlucky circumstances beyond their control such as historical injustices towards specific groups or individuals.30

These differences are relevant to how we think about the impact of ADA on fairness. For instance, suppose we are concerned with whether an algorithm used to make healthcare decisions is fair to all patients. On a purely egalitarian conception of fairness, we ought then to assess whether the algorithm produces equal outcomes for all users (or all relevant subgroups – at which point we have to ask which are the relevant subgroups). On a minimax conception (i.e. maximising benefits for the worst off), by contrast, we should instead ensure the algorithm results in the best outcomes for the worst off user group, even if this leads to a greater disparity between the outcomes for different groups, or produces worse results on average. Adopting a conception of fairness based on free choice would instead require us to decide which conditions are truly free choices and which are merely lucky circumstance. For example, is smoking, or obesity, a free choice? Simply stating that the algorithm should be 'fair' would fail to distinguish between these different potential meanings of the concept.31
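As a toy illustration of how these conceptions can pull apart in practice, the following Python sketch – a hypothetical example constructed for this discussion, with made-up numbers – scores two candidate decision-making algorithms by their projected outcomes for three patient groups, and shows that an egalitarian criterion and a minimax criterion select different algorithms:

# Projected outcome scores (higher is better) for three patient groups under
# two candidate algorithms. The numbers are purely illustrative.
outcomes = {
    "algorithm_1": {"group_a": 0.80, "group_b": 0.78, "group_c": 0.76},  # nearly equal outcomes
    "algorithm_2": {"group_a": 0.95, "group_b": 0.90, "group_c": 0.82},  # unequal, but every group better off
}

def disparity(scores):
    # Egalitarian reading: prefer the smallest gap between best- and worst-served groups.
    return max(scores.values()) - min(scores.values())

def worst_off(scores):
    # Minimax reading: prefer the best outcome for the worst-off group.
    return min(scores.values())

egalitarian_choice = min(outcomes, key=lambda name: disparity(outcomes[name]))
minimax_choice = max(outcomes, key=lambda name: worst_off(outcomes[name]))

print("Egalitarian choice:", egalitarian_choice)  # algorithm_1: smallest disparity between groups
print("Minimax choice:", minimax_choice)          # algorithm_2: best outcome for the worst-off group

Simply requiring that the system be 'fair' would not settle which of the two to deploy; that depends on which conception of fairness is adopted, and on further choices such as how the relevant patient groups are drawn.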
Similar conceptual complexities can be found in most of the key terms framing debates around the impacts of ADA. What does it mean for an algorithmic decision-making system to have 'intelligibility' and why is this an important feature for such systems to possess? What counts as 'personal' data and why is it important to

27 See, for example, Hanisch (2006, 1969). Similar arguments and slogans were used across a number of political movements in the 1960s and 1970s,
Crenshaw (1995).

28 GPI Atlantic (1999).

29 Friedman and Nissenbaum (1996).

30 Some legal systems grant special protections against discrimination or unequal treatment to groups defined by certain 'protected characteristics', such as gender,
ethnicity or religion. This is sometimes justified on the grounds that these groups have historically been subjected to unfair discrimination. However, what makes
discrimination against such groups especially wrong is disputed. See, for example, Altman (2015).

31 See Binns (2017) for a fuller survey of different theories of fairness and their relation to machine learning.

protect the privacy of these (as opposed to 'non-personal' data)? What counts as 'meaningful' consent?

While rich philosophical literatures exist on most of these concepts, there is relatively little work spelling out their application to how we talk and think about the ethical implications of ADA.

Although clarification is an important step to making constructive progress, doing this will not always be straightforward. Differences in the understanding of key concepts sometimes reflect deeper, substantial disagreements between groups who endorse fundamentally different values or have conflicting interests. For example, when libertarians prefer a choice-based conception of fairness while social democrats prefer a distribution-based conception, this is not merely a terminological dispute. Rather, they fundamentally disagree about what justice requires us to prioritise.

Merely analysing and highlighting these differences is unlikely to yield uncontroversial solutions to these disagreements. Navigating such disagreements will often require political solutions, rather than mere conceptual analysis. For example, by designing political processes or institutions which can be recognised as legitimate by different publics or interest groups even when they disagree with individual decisions. Clarifying and analysing the key concepts can however help distinguish cases where disputes are merely terminological, and identify where further work is needed to resolve or navigate substantial disagreements.

3.5 Summary and recommendations

Making progress in debates on ADA ethics and societal impacts requires disentangling the different meanings of key terms used to frame these debates. Three kinds of work are necessary to make progress on this task:

1. Mapping and clarifying ambiguities

The first kind of work needed is to understand the differences and ambiguities in the use of key concepts surrounding debates of ADA.

An important step towards this will be mapping exercises of the kind mentioned in 3.1, which seek to disentangle and classify different types of problems or cases that are currently lumped together under the same terminology. These mapping exercises will need to clarify both (a) different possible interpretations and uses of a given concept, such as 'transparency', and also (b) how important concepts are, in practice, used by different groups and communities. To achieve (a), in-depth philosophical analyses will sometimes be needed to uncover the conceptual complexities hiding under commonly used concepts. To achieve (b), this work will need to intersect with relevant technical research, for example work on different possible mathematical definitions of fairness, and empirical social sciences research to elucidate different understandings of similar concepts within different disciplines and between different cultures. We discuss such empirical work further in section 5.

Most work of this kind has centred on the conceptual clusters of bias/fairness/discrimination and transparency/explainability, intelligibility/interpretability, and to some extent privacy and responsibility/accountability. More studies of this kind would be welcome and should be extended more systematically to other important concepts in this space (including those that we discuss in section 4, namely, dignity, solidarity, citizenship, convenience and self-actualisation).

2. Bridging disciplines, sectors, publics and cultures

Analysing and bringing these complexities and divergences into focus will help to mitigate the risk of cross-talking. However, we also need constructive work aiming to bridge these differences, i.e. engaging relevant practitioners and stakeholders to make them aware of relevant differences and actively enabling communication across these divides.

In terms of crossing disciplinary divides, mapping and communicating differences in use will help identify situations where researchers or practitioners are misunderstanding each other. Actively enabling communication will furthermore require interdisciplinary collaborations where researchers can help each other translate their findings to different target audiences. Examples of such collaborations already taking place include papers co-authored between lawyers and technical researchers. These can be taken as a template for further collaborations. Workshops that bring together different disciplines to discuss key concepts could be another model for bridging these terminological and language differences.

Much of the current international debate around ADA ethics emanates from Western countries and is framed

Much of the current international debate around ADA ethics emanates from Western countries and is framed in terms of Western intellectual traditions.32 Work is, however, ongoing in other cultural spheres, in particular East Asia, which is at the forefront of ADA research. An important step to integrate a fuller range of perspectives into international debates will be to translate important policy documents and research literature – both from other languages into English, and the reverse. Ensuring that major conferences and meetings have delegates from a wide range of countries and other backgrounds will also be important. Furthermore, work should be done to identify research from other countries, in particular from developing countries, whose perspectives are currently not strongly represented. This should include building collaborations with researchers and policy makers in those countries.

3. Building consensus and managing disagreements

Finally, work should be done to build consensus around


the best ways to conceptualise the ethical and societal
challenges raised by ADA. We should seek to find
common understandings and pieces of shared conceptual
machinery. These need not replace the existing frameworks
within disciplines or cultures, but should be ones that
different stakeholders can agree are good enough for
joint constructive action. Though we may not always
be able to agree on a single precise definition for every
term related to the ethics of AI, we can clarify meaningful
disagreements and prevent people from talking past
one another.

Many of the recommendations discussed in relation to


mapping and clarifying ambiguities and bridging disciplines
and cultures will contribute to this. However, while
traditional conceptual analysis provides an important
starting point for resolving ambiguities, settling on definitions
will require engaging with all stakeholders influenced by
technologies, including the public. We discuss ways of
involving the public in section 4.4.2.

It should be stressed that not all ethically relevant


disagreements can be resolved by purely conceptual means.
Some conceptual differences reflect deeper disciplinary,
cultural or political disagreements. To address these, we
will need to consider how to manage the tensions, trade-
offs and dilemmas to which these disagreements give rise.
We explore these in the next section.

32 See appendix 1 for comments on how discussion of these issues differs in developed versus developing countries.

4. Exploring and addressing tensions

The conceptual work described in section 3 aims to build clarity and consensus around the key concepts and principles of ADA ethics. This is an important starting point, but is not enough if these principles cannot be put into practice, and it is not yet clear that the very high-level principles proposed for ADA ethics can guide action in concrete cases. In addition, applying principles to concrete cases often reveals obstacles to their implementation: they may be technically unrealisable, overly demanding, or implementing them might endanger other things we value. For instance, recent attempts to construct definitions of fairness that are sufficiently mathematically precise to be implemented in machine learning systems have highlighted that it is often mathematically impossible to optimise for different, intuitively plausible dimensions of fairness.33 It is therefore far from clear what it would mean to ensure that AI, data, and algorithms 'operate on principles of fairness'34 in practice.

To think clearly about ADA-based technologies and their impacts, we need to shift our focus to exploring and addressing the tensions that arise between different principles and values when trying to implement them in practice. While several of the existing discussions recognise the importance of facing these conflicts, none do so systematically. For example, the Montréal Declaration on Responsible AI (2018) states that its principles 'must be interpreted consistently to prevent any conflict that could prevent them from being applied', but it is not clear how one is supposed to prevent such a conflict in practice. Similarly, Cowls and Floridi (2018) recognise that using AI for social good requires 'resolving the tension between incorporating the benefits and mitigating the potential harms of AI', but do not talk about specific tensions in detail or how to resolve them.

4.1 Values and tensions

As highlighted in section 2, existing collections of principles invoke a number of values that can be at stake in applications of ADA. These express different kinds of aims which either motivate the use of ADA-based technologies for various purposes, or which such technologies ought to preserve. Importantly, these aims are multiple rather than one overall goal such as utility, goodness or human flourishing.

These values are attractive ideals, but in practice they can come into conflict, meaning that prioritising one value can require sacrificing another. Developing more complex algorithms that improve our ability to make accurate predictions about important questions may reduce our ability to understand how they work, for instance. The use of data-driven technologies might also make it impossible for us to fully guarantee desirable levels of data privacy. But if the potential gains of these technologies are significant enough – new and highly effective cancer treatments, say – communities might decide a somewhat higher risk of privacy breaches is a price worth paying.

We use the umbrella term 'tension' to refer to different ways in which values can be in conflict, some more fundamentally than others (as elaborated in section 4.4.1). Note that when we talk about tensions between values, we mean tensions between the pursuit of different values in technological applications, rather than an abstract tension between the values themselves. The goals of efficiency and privacy are not fundamentally in conflict across all scenarios, for example, but do come into conflict in the context of certain data-driven technologies. Given the right contextual factors, ADA-based technologies might create tensions between any two (or more) of these values – or even simultaneously threaten and enhance the same value in different ways.

Some of these tensions are more visible and of higher priority than others. The table below highlights some key tensions between values that arise from current applications of ADA-based technologies:

33 See Friedler et al. (2016); Kleinberg et al. (2016); Chouldechova (2018); Binns (2017).

34 As per principle 2 of the Lords’ Select Committee on AI’s proposed ‘cross-sector AI code’.

EXAMPLES OF TENSIONS BETWEEN VALUES

Quality of services versus privacy: using personal data may improve public services by tailoring them based on personal characteristics or demographics, but compromise personal privacy because of high data demands.

Personalisation versus solidarity: increasing personalisation of services and information may bring economic and individual benefits, but risks creating or furthering divisions and undermining community solidarity.

Convenience versus dignity: increasing automation and quantification could make lives more convenient, but risks undermining those unquantifiable values and skills that constitute human dignity and individuality.

Privacy versus transparency: the need to respect privacy or intellectual property may make it difficult to provide fully satisfying information about an algorithm or the data on which it was trained.

Accuracy versus explainability: the most accurate algorithms may be based on complex methods (such as deep learning), the internal logic of which their developers or users do not fully understand.

Accuracy versus fairness: an algorithm which is most accurate on average may systematically discriminate against a specific minority.

Satisfaction of preferences versus equality: automation and AI could invigorate industries and spearhead new technologies, but also exacerbate exclusion and poverty.

Efficiency versus safety and sustainability: pursuing technological progress as quickly as possible may not leave enough time to ensure that developments are safe, robust and reliable.

Given the wide scope of possible applications of ADA-based technologies, and the variety of values that may be impacted (positively or negatively) by these applications, there is unlikely to be any simple, exhaustive list of all possible tensions arising from ADA in all contexts. It would go beyond the scope of this report to attempt to systematically map all of these. We have therefore limited our discussion to four tensions that are central to current debates, as summarised in table 1.

The tensions in the first two rows reflect how goods offered by ADA technologies may come into conflict with the societal ideals of fairness and solidarity – we therefore refer to these tensions as societal:

1. Using algorithms to make decisions and predictions more accurate versus ensuring fair and equal treatment.

2. Reaping the benefits of increased personalisation in the digital sphere versus enhancing solidarity and citizenship.

This first societal tension, between accuracy and fairness, has been widely discussed in controversies and case studies involving ADA-based technologies in recent years. The second tension, between personalisation and solidarity, has received less explicit attention – but we believe it is also fundamental to ethical concerns surrounding the application of ADA-based technologies in society.

The next two rows concern ideals of individual life, so we refer to them as individual tensions:

3. Using data to improve the quality and efficiency of services versus respecting the privacy and informational autonomy of individuals.

4. Using automation to make people's lives more convenient versus promoting self-actualisation and dignity.

Again, we highlight one tension that has already been widely recognised, between the quality and efficiency of services and the informational autonomy of individuals, and one that has been discussed less, between the convenience offered by automation on the one hand and the threat to self-actualisation on the other.

All four tensions, however, share the following crucial similarities: they arise across a wide range of sectors, and they touch upon the deepest ethical and political ideals of modernity. Between them, they cover a broad spectrum of issues where further research is likely to be valuable for managing the impacts of current and foreseeable applications of ADA.

TABLE 1. KEY TENSIONS ARISING BETWEEN THE GOODS OFFERED BY ADA TECHNOLOGIES AND IMPORTANT SOCIETAL AND INDIVIDUAL VALUES

Goods offered by ADA technologies | Core values in tension with those goods

Societal values:
Accuracy | Fairness
Personalisation | Solidarity

Individual values:
Quality & efficiency | Informational autonomy
Convenience | Self-actualisation

4.2 Unpacking four central tensions

Tension 1: Using algorithms to make decisions and predictions more accurate versus ensuring fair and equal treatment.

This tension arises when various public or private bodies base decisions on predictions about the future behaviour of individuals (e.g. when probation officers estimate risk of reoffending, or school boards evaluate teachers),35 and when they employ ADA-based technologies to improve their predictions. The use of blunt quantitative tools for evaluating something as complex as human behaviour or quality of teaching can be misguided, as these algorithms can only pick out easily measurable proxies.36 Nonetheless, these algorithms can sometimes be more accurate on some measures than alternatives, especially as systematic bias afflicts judgments made by humans too. This raises questions of whether and when it is fair to make decisions affecting an individual's life based on an algorithm that inevitably makes generalisations, which may be missing important information and which, in addition to this, can systematically disadvantage some groups over others. An additional way in which algorithms can undermine fairness and equality is that it is often difficult to explain why they work – either because they are based on 'black box' methods, or because they use proprietary software – thus taking away individuals' ability to challenge these life-altering decisions.

Hypothetical illustration: to assist in decisions about whether to release defendants on bail or to grant parole, a jurisdiction adopts an algorithm that estimates the 'recidivism risk' of criminal defendants, i.e. their likelihood of re-offending. Although it is highly accurate on average, it systematically discriminates against black defendants, because the rate of 'false positives' – individuals classed as high risk who did not go on to reoffend – is almost twice as high for black as for white defendants.37 Since the inner workings of the algorithm are a trade secret of the company that produced it (and are in any case too complex for any individual to understand), the defendants have little to no recourse for challenging a verdict that has huge consequences for their lives.
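The kind of disparity described in this illustration can be made concrete with a simple audit of a model's error rates broken down by group. The sketch below uses invented numbers purely for illustration (it is not based on any real system or dataset); the point is that overall accuracy and per-group false positive rates are different quantities, and a model can look reasonable on the first while failing badly on the second.

```python
# Minimal sketch of a per-group error-rate audit (all data invented for illustration).
# y_true: 1 if the person went on to reoffend, 0 otherwise.
# y_pred: 1 if the model classed them as 'high risk', 0 otherwise.

def false_positive_rate(y_true, y_pred):
    """Share of people who did NOT reoffend but were still classed as high risk."""
    false_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return false_pos / negatives if negatives else 0.0

def accuracy(y_true, y_pred):
    return sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)

# Hypothetical predictions for two demographic groups, A and B.
groups = {
    "A": {"y_true": [0, 0, 0, 0, 1, 1, 1, 1], "y_pred": [0, 0, 0, 1, 1, 1, 1, 1]},
    "B": {"y_true": [0, 0, 0, 0, 1, 1, 1, 1], "y_pred": [1, 1, 0, 0, 1, 1, 1, 1]},
}

all_true = sum((g["y_true"] for g in groups.values()), [])
all_pred = sum((g["y_pred"] for g in groups.values()), [])
print(f"Overall accuracy: {accuracy(all_true, all_pred):.2f}")

for name, g in groups.items():
    fpr = false_positive_rate(g["y_true"], g["y_pred"])
    print(f"Group {name}: false positive rate = {fpr:.2f}")

# In this toy example the model is fairly accurate overall, yet group B's false
# positive rate is double group A's – exactly the pattern at issue in such cases.
```

Whether, and under which definition of fairness, such a disparity is acceptable is precisely the kind of judgment that, as this section argues, cannot be settled by technical means alone.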
make decisions affecting an individual’s life based on
an algorithm that inevitably makes generalisations,
which may be missing important information Tension 2: Reaping the benefits of increased
and which, in addition to this, can systematically personalisation in the digital sphere versus enhancing
disadvantage some groups over others. An additional solidarity and citizenship.
way in which algorithms can undermine fairness and
equality is that it is often difficult to explain why they Companies and governments can now use people’s
work – either because they are based on ‘black box’ personal data to draw inferences about their
methods, or because they use proprietary software – characteristics or preferences, which can then be used
thus taking away individuals’ ability to challenge these to tailor the messages, options and services they see.
life-altering decisions. This personalisation is the end of crude ‘one size fits all’

35 Angwin, J., et al. (2016).

36 For more on this topic see the work of, for example, Cathy O'Neil: www.bloomberg.com/view/articles/2018-06-27/here-s-how-not-to-improve-public-schools

37 Angwin, J., et al. (2016).



This personalisation is the end of crude 'one size fits all' solutions and enables individuals to find the right products and services for them, with large potential gains for health and well-being. However, this risks threatening the guiding ideals of democracy and the welfare state, namely citizenship and solidarity.38 These ideals invite us to think of ourselves as citizens and not just individual consumers, and to provide for each other in the face of unexpected blows of fate beyond individual control. Public commitments that certain goods should be ensured for citizens irrespective of their ability to pay (education, healthcare, security, housing, basic sustenance, public information) depend on there being a genuine uncertainty about which ones of us will fall ill, lose employment, or suffer in other ways. This uncertainty underpins commitments to risk-pooling, and without it there is an increased tension between promoting individual benefit and collective goods.

Hypothetical illustration: a company markets a new personalised insurance scheme, using an algorithm trained on rich datasets that can differentiate between people in ways that are so fine-grained as to forecast effectively their future medical, educational, and care needs. The company is thus able to offer fully individualised treatment, better suited to personal needs and preferences. The success of this scheme leads to the weakening of publicly funded services because the advantaged individuals no longer see reasons to support the ones with greater needs.

Tension 3: Using data to improve the quality and efficiency of services versus respecting the privacy and informational autonomy of individuals.

This tension arises when machine learning and big data are used to improve a range of different services: public ones such as healthcare, education, social care, and policing, or any service offered privately. These technologies could enable service providers to tailor services exactly to customers' needs, improving both quality of services and efficient use of taxpayers' money. However, the heavy demand on individuals' personal data raises concerns about loss of privacy and autonomy of individuals over their information (we shall use the term 'informational autonomy' to denote this value).39

Hypothetical illustration: a cash-strapped public hospital gives a private company access to patient data (scans, behaviours, and medical history) in exchange for implementing a machine learning algorithm that vastly improves doctors' ability to diagnose dangerous conditions quickly and safely. The algorithm will only be successful if the data is plentiful and transferable, which makes it hard to predict how the data will be used in advance, and hard to guarantee privacy and ensure meaningful consent for patients.

Tension 4: Using automation to make people's lives more convenient versus promoting self-actualisation and dignity.

Many ADA-based technologies are currently developed by private commercial entities working to disrupt existing practices and replace them with more efficient solutions convenient to as many customers as possible. These solutions may genuinely improve people's lives by saving them time on mundane tasks that could be better spent on more rewarding activities, and by empowering those previously excluded from many activities. But automated solutions also risk disrupting an important part of what makes us human.40 Literature and the arts have long explored anxieties about humans relying on technology so much that they lose their creative, intellectual, and emotional capacities.41 These capacities are essential to individuals' ability to realise their life plans autonomously and thoughtfully – an ideal that is often referred to as self-actualisation and dignity. The fast rise of ever more effective and comprehensive AI systems makes the possibility of human decline and obsolescence – and associated fears of deskilling, atrophy, homogenisation, and loss of cultural diversity – more vivid and realistic. These fears also arise in relation to the displacement of human labour and employment by AI and robots because, in addition to livelihood, work is a source of meaning and identity.

38 See Prainsack and Buyx (2017) on personalisation and solidarity in healthcare and biomedicine.

39 A recent example of this tension is the case of DeepMind and the Royal Free hospital: www.theguardian.com/technology/2017/jul/03/
google-deepmind-16m-patient-royal-free-deal-data-protection-act

40 Turkle (2016 and 2017) explores these trends in depth.

41 E.M. Forster’s dystopian short story ‘The Machine Stops’ (1909, see Forster 1947) and the 2008 animated film Wall-E illustrate this concern.

Hypothetical illustration: AI makes possible an all-purpose automated personal assistant that can translate between languages, find the answer to any scientific question in moments, and produce artwork or literature for the users' pleasure, among other things. Its users gain unprecedented access to the fruits of human civilisation, but they no longer need to acquire and refine these skills through regular practice and experimentation. These practices progressively become homogenised and ossified, and their past diversity is now represented by a set menu of options ranked by convenience and popularity.

4.3 Identifying further tensions

The four tensions outlined above are central to thinking about the ethical and societal implications of ADA broadly and as they stand today. However, other tensions can and should be identified, particularly when focusing more narrowly on specific aspects of ADA ethics, and as the impacts of technology on society change over time.

Our approach to identifying tensions begins with a list of important values and principles we want our use of ADA-based technologies to respect. We then consider what obstacles might arise to realising these values in practice, and ways that using technology to enhance or promote one value might undermine another.

This approach could usefully be extended or repeated by others to identify additional tensions to those outlined above, as different perspectives will inevitably unearth slightly different tensions. Appendix 3 presents some different ways of carving up issues, publics, and sectors, which could be used to help identify a variety of tensions. Repeating this process over time will also be important, as the ways that technology may be used to enhance or threaten key values change, and even the very values we prioritise as a society may change.

Thinking about tensions could also be enhanced by systematically considering different ways that tensions are likely to arise. We outline some conceptual lenses that serve this purpose:

• Winners versus losers. Tensions sometimes arise because the costs and benefits of ADA-based technologies are unequally distributed across different groups and communities. For example:

–– A technology which benefits the majority may systematically discriminate against a minority: predictive algorithms in a healthcare setting may improve outcomes overall, but worsen outcomes for a minority group for whom representative data is not easily accessible, for example.

–– Automation may enrich the lives of the most privileged, liberating them to pursue more worthwhile activities, while wiping out the livelihood of those whose labour is replaced and who do not have other options. In addition to shifts in the distribution of material resources, prestige, power, and political influence are also affected.

• Short term versus long term. Tensions can arise because values or opportunities that can be enhanced by ADA-based technologies in the short term may compromise other values in the long term. For example:

–– Technology which makes our lives better and more convenient in the short term could have hard-to-predict impacts on societal values in the long term: as outlined above, for example, increasing personalisation could make our lives easier and more convenient, but might undermine autonomy, equality and solidarity in the long term.

–– Speeding up innovation could create greater benefits for those alive today, while introducing greater risks in the long term: there is a trade-off between getting the benefits of AI as quickly as possible, and taking extreme caution with the safety and robustness of advanced systems.

• Local versus global. Tensions may arise when applications that are defensible from a narrow or individualistic view produce negative externalities, exacerbating existing collective action problems or creating new ones. For example:

–– Technology that is optimised to meet individual needs might create unforeseen risks on a collective level: a healthcare algorithm might recommend against vaccination for individuals, which could have huge negative impacts on global health.

4.4 Resolving the tensions

So far we have been using the single term 'tension' to denote what is in fact several different kinds of conflict between values: some fundamental, others merely practical. Because these differences matter to how these tensions should be resolved, we spell them out here before discussing the solutions.

4.4.1 Kinds of tensions

The quintessential ethical conflict is a true dilemma: a conflict between two or more duties, obligations, or values, both of which an agent would ordinarily have reason to pursue but cannot. These are instances when no genuine resolution is possible because the very acts that further one value (say, Antigone's duty to bury her dead brother) take away from the other value (her duty to obey the king). We call these true dilemmas because the conflict is inherent in the very nature of the values in question and hence cannot be avoided by clever practical solutions. Sometimes the tensions we discussed above will take the form of such a dilemma, in which it is genuinely impossible to, say, implement a new automating technology without devaluing and undermining certain human skills and capacities. In true dilemmas, a choice has to be made to prioritise one set of values, say speed, efficiency and convenience, over another, say achievement or privacy.

However, sometimes what appears like a tough choice necessitating the sacrifice of important values is not one in reality. Claims of dilemmas can be exaggerated or go unexamined, such as when a company claims that privacy needs to be sacrificed without properly studying how its goals might be achieved without this sacrifice. In many cases the tension we face is a dilemma in practice, where the tension exists not inherently, but due to our current technological capabilities and constraints, including the time and resources we have available for finding a solution. The tension between transparency and accuracy is a useful illustration. These two ideals are not fundamentally in conflict with one another (in the same way that some of the conflicting definitions of fairness are, for example). The conflict here is a more practical one: generally, producing the most accurate algorithm possible will tend to result in models that are more complex and therefore more difficult to make fully intelligible to humans. However, it is an open empirical question to what extent we are forced to make a trade-off between these two ideals, and methods are beginning to be developed which increase transparency without compromising accuracy.42
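One illustration of how this practical trade-off can be softened is post-hoc explanation: the accurate model is left untouched, and a separate, simpler model is fitted to approximate and describe its behaviour. The sketch below is only a generic example of this idea (a 'global surrogate' decision tree); it is not the specific approach referenced above, and the dataset and model choices are arbitrary stand-ins.

```python
# Sketch: adding transparency to a complex model without changing its accuracy,
# by fitting a small, human-readable 'surrogate' tree to the model's predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

# 1. The accurate but hard-to-interpret model keeps its role (and its accuracy).
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Complex model accuracy:", round(black_box.score(X_test, y_test), 3))

# 2. Fit a shallow tree to the complex model's *predictions*, not the true labels:
#    it approximates the model's behaviour rather than the underlying data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. 'Fidelity' measures how often the readable surrogate agrees with the model.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print("Surrogate fidelity:", round(fidelity, 3))

# 4. The surrogate can be rendered as explicit if-then rules for scrutiny.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

Approximations of this kind are partial by nature – a low-fidelity surrogate can mislead – which is one reason why the extent to which such methods genuinely dissolve the tension remains an empirical question.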
This in turn highlights that some apparent tensions may in fact be false dilemmas: situations where there exists a third set of options beyond having to choose between two important values. We can commit more time and resources to developing a solution which avoids having to sacrifice either value, or delay implementing a new technology until further research makes better technologies available. False dilemmas can arise when we fail to recognise either the extent to which our current technological capabilities can in fact resolve a tension, or the fact that there are no overriding constraints forcing us to implement a given technology immediately.

The best approach to resolving a tension will depend on the nature of the tension in question.

4.4.2 Trade-offs and true dilemmas

To the extent that we face a true dilemma between two values, any solution will require making trade-offs between those values: choosing to prioritise one value at the expense of another. For example, if we determined that each of the tensions presented above could not be dissolved by practical means, we would need to consider trade-offs such as the following:

• Trade-off 1: Judging when it is acceptable to use an algorithm that performs worse for a specific subgroup, if that algorithm is more accurate on average across a population.

• Trade-off 2: Judging how much we should restrict personalisation of advertising and public services for the sake of preserving ideals of citizenship and solidarity.

• Trade-off 3: Judging what risks to privacy it is acceptable to incur for the sake of better disease screening or greater public health.

• Trade-off 4: Judging what kinds of skills should always remain in human hands, and therefore where to reject innovative automation technologies.

The difficult question is how such trade-off judgments should be made.

42 See, for example, Adel et al. (2018).



In business and economics, solutions to trade-offs are traditionally derived using cost-benefit analysis (CBA), where all the costs and benefits of a given policy are converted to units on the same scale (be it monetary or some other utility scale such as well-being) and a recommendation is made on the basis of whether the benefits outweigh the costs. These methods are used almost universally in governance, industry, and commerce because they provide clear procedures and appear objective. It will be tempting to transfer these same methods to the dilemmas above, demanding data on how much value all stakeholders put on each of the ideals involved in any given dilemma and crunching the numbers thereafter.

We caution against this. Cost-benefit analysis can be part of the process of exploring trade-offs. The process is transparent and mechanical and can generate useful data to input into decision-making. But CBA alone should not be seen as the answer: it is technocratic; it does not recognise the fact that values are vague and unquantifiable, and that numbers themselves can hide controversial value judgments; and, finally, the very act of economic valuation of a good can change people's attitude to it (this explains why applying CBA to environmental or other complex and public goods attracts so much controversy).43
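To make the mechanics (and the limits) of this concrete, the toy sketch below runs the basic CBA arithmetic on entirely hypothetical figures for an imagined data-sharing policy; the policy, the impact categories and every number are invented for illustration.

```python
# Toy cost-benefit calculation (all figures hypothetical and purely illustrative).
# Each anticipated impact is converted to a single monetary scale and summed.
impacts_gbp = {
    "earlier diagnoses":      +12_000_000,
    "reduced staff workload":  +3_500_000,
    "privacy-breach risk":     -6_000_000,
    "erosion of public trust": -4_000_000,  # a guess: how should trust be priced at all?
}

net_benefit = sum(impacts_gbp.values())
print(f"Net benefit: {net_benefit:+,} GBP")
print("CBA recommendation:", "proceed" if net_benefit > 0 else "do not proceed")

# The caution above bites exactly here: the procedure is clear and mechanical,
# but the negative entries compress contested value judgments into single numbers,
# and changing one assumption can flip the recommendation entirely.
```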
Resolution of these dilemmas can take a variety of forms depending on precise political arrangements. But one approach we wish to highlight (and one that is also relevant to the cases discussed in section 3) is that the legitimacy of any emerging solution can be achieved through consultation and inclusive public deliberation. Methods for implementing such deliberations, where small groups of citizens are guided through controversies by experts and moderators, are emerging in political science and in environmental and medical research where public participation matters.44 In the case of ADA-based technologies, such consultations are not yet well established, but they are much needed.45 Their goals should be as follows:

1. To give voice to all stakeholders and to articulate their interests with rigour and respect (data about potential costs and benefits of technologies can be useful for this).

2. To identify acceptable and legitimate trade-offs that are compatible with the rights and entitlements of those affected by these technologies.

3. To arrive at resolutions that, even when imperfect, are at least publicly defensible.

Faced with tragic choices between different ideals of virtue and the good life, such an approach accepts that human judgment, protest, contestation, and consensus-building are all unavoidable and that no technocratic process can replace them.46 We talk more about the process of public deliberation in section 5.2.

4.4.3 Dilemmas in practice

On the other hand, to the extent that we face a dilemma in practice, we lack the knowledge or tools to advance the conflicting values without sacrificing one or the other. In this case, trade-offs may or may not be inevitable, depending on how quickly and with what resources we need to implement a policy or a technology. Data-driven methods for improving the efficiency of public services and securing high levels of informational privacy may be possible in principle, for example, but not available at the moment. For each of the four tensions highlighted, it is possible that with more knowledge or better methods the tension would dissolve or at least be alleviated.

In these situations we face a choice:

• To put the technology to use in its current state. In this case, we will need to determine and implement some legitimate trade-off that sacrifices one value for another. This will involve the same kind of work as described for true dilemmas in section 4.4.2.

• To hold off implementing this technology and instead invest in research on how to make it serve all the values we endorse equally and maximally.

43 For controversies surrounding cost benefit analysis see Frank (2000), Alexandrova and Haybron (2011), Haybron and Alexandrova (2013). For complexities
of identifying and measuring what counts as benefit and well-being see Alexandrova (2017).

44 Stanford’s Centre for Deliberative Democracy is a pioneer http://cdd.stanford.edu/what-is-deliberative-polling/

45 Though some work on consultation relating to ADA has begun, led particularly by the RSA and Royal Society, as we will discuss in section 5.2.

46 See Moore (2017) and Alexandrova (2018) among others on the crucial role of consultation and contestation of expertise in democracies.

This choice can be thought of as involving its own tension, of the short-term versus long-term kind discussed in section 4.3: to what extent should we postpone the benefits of new technologies and instead invest the time and resources necessary to better resolve these tensions? This is not a binary choice, of course. Rather, we might choose to strike a balance: try to navigate the trade-offs required to make decisions about how to use technology today, while investing in research to explore whether the tension might be fully resolvable in future.

4.4.4 Better understanding tensions

As this discussion highlights, in order to make progress on the tensions arising in relation to ADA-based technologies, it is crucial to be clear about the nature of the tensions – do they involve true dilemmas, dilemmas in practice, or even false dilemmas?47

In addition to developing methods for balancing trade-offs and investing in better technology, we should therefore also invest in research to better understand the nature of important tensions. We can explore this by asking the following questions:

• Can the most accurate predictive algorithms be used in a way that respects fairness and equality? Where specific predictive algorithms are currently used (e.g. in healthcare, crime, employment), to what extent do they discriminate against or disadvantage specific minorities?

• Can the benefits of personalisation be reaped without undermining citizenship and solidarity? In what specific ways might different forms of personalisation undermine these important ideals in future? How can this be addressed or prevented?

• Can personal data be used to improve the quality and efficiency of public services without compromising informational autonomy? To what extent do current methods allow the use of personal data in aggregate for overall social benefits, while protecting the privacy of individuals' data?

• Can automation make lives more convenient without threatening self-actualisation? Can we draw a clear line between contexts where automation will be beneficial or minimally harmful and tasks or abilities that should not be automated?

Answering these questions will in part involve conceptual research of the kind discussed in section 3. For instance, clarifying what kind of algorithmic 'fairness' is most important is an important first step towards deciding whether this is achievable by technical means. In addition, since these are largely empirical questions about what is in fact possible, answering them will often require drawing on evidence about what is technically feasible, as described in detail in the next section. In some cases, current or potential technology may be able to resolve or lessen some of the tensions.

Finally, these tensions will only be resolved in practice if there are sound and appropriate institutions, laws, and governance structures to undergird and implement these efforts. Standards, regulations, and systems of oversight concerning ADA technologies are currently in flux, and much uncertainty surrounds their future.48 We urge that new approaches to governance and regulation be duly sensitive to the tensions described above and devise legitimate institutions that will help communities to navigate whatever tensions arise and at whatever levels.

4.5 Summary and recommendations

This section has introduced the idea, and the importance, of thinking about tensions between the values that different principles of ADA ethics embody, in order to ensure that these principles can be action-guiding in concrete cases. The following high-level recommendation follows immediately:

• Move the focus of ADA ethics towards identifying the tensions arising from implementation of ethical practice involving ADA.

The four tensions we propose as priorities in section 4.2 encompass controversies and case studies that commentators across different fields and sectors are beginning to explore. Hence, our next set of recommendations is to:

• Investigate instances of the four tensions highlighted in this report across different sectors of society, exploring specific cases where these tensions arise:

47 Concrete cases may well involve a combination of all three kinds of dilemmas, once we distinguish at a more fine-grained level between the different values held
by different stakeholders in a given case.

48 See Wachter and Mittelstadt (2018) for the uncertainty surrounding the implications and implementation of the GDPR, for example.

–– Using algorithms to make decisions and


predictions more accurate versus ensuring
fair and equal treatment.

–– Reaping the benefits of increased personalisation


in the digital sphere versus enhancing solidarity
and citizenship.

–– Using data to improve the quality and efficiency


of services versus respecting the privacy and
informational autonomy of individuals.

–– Using automation to make people’s lives more


convenient versus promoting self-actualisation
and dignity.

• Identify further tensions based on other value conflicts


and their underlying causes using the following questions:

–– Where might the costs and benefits of ADA-based


technologies be distributed unequally across groups?

–– Where might short-term benefits come at the cost


of longer-term values?

–– Where might ADA-based technologies benefit


the individual or groups but raise problems at
a collective level?

Articulating the tensions that apply in a given case is the first


step in implementing ethical technologies, but the next step
should be towards resolving these conflicts. How we do so
depends on the nature of any given tension. We therefore
recommend that further research should aim to:

• Identify the extent to which key tensions involve true


dilemmas, dilemmas in practice or false dilemmas.
Often this will involve investigating specific instances
of the tension, and considering ways to resolve it without
sacrificing either of the key values.

• Where we face dilemmas in practice, conduct research


into how these dilemmas might be dissolved, for
example by advancing the frontiers of what is technically
possible such that we can get more of both the values
we care about.

• Where we face true dilemmas between values, or


practical dilemmas that we are forced to act on now,
conduct research into dilemma resolution through
legitimation of trade-offs in public deliberations
and regulatory institutions adapted specially to
ADA technologies.

5. Developing an evidence base

Current discussions of the ethical and societal implications of ADA suffer from gaps in our understanding: of what is technologically possible, of how different technologies will impact society, and of what different parts of society want and need. To make progress in using ADA-based technologies for the good of society, we need to build a stronger evidence base in all of these areas. Building this stronger evidence base will be particularly important for those developing practical frameworks and guidelines for AI ethics, including government bodies, legislators, and standards-setting bodies.

For example, the tension between using data to improve public services and the need to protect personal privacy is difficult in part because discussions of this topic are lacking good evidence on the following:

• How much machine learning and 'big data' could improve public services – and to what extent and in what ways personal privacy might be compromised by doing so.

• To what extent different publics value better healthcare relative to data privacy, and in what contexts they are happy for their data to be used.

• What the longer-run consequences of increased use of personal data by authorities might be.

Ensuring that algorithms, data and AI are used to benefit society is not a one-off task but an ongoing process. This means that, as well as understanding technological capabilities and societal needs as they stand today, we also need to think about how these things might evolve in the future, so that we can develop adaptive strategies that take future uncertainty into account.

This section outlines some of the general areas of research that will be needed to develop a stronger evidence base, and highlights some priority questions based on the tensions discussed in section 4. Our focus is on what kinds of questions need answering and the general directions of research. While we highlight some promising methods, these are not meant to be exhaustive. We have not attempted to survey all existing or possible methods for studying these questions, and for some questions new and innovative research strategies may be needed. In general, a plurality of disciplinary perspectives and innovative methodological thinking is likely to provide the best possible evidence base.

5.1 Understanding technological capabilities and impacts

5.1.1 Technological capabilities – what is possible?

Understanding technological capabilities is a vital foundation for understanding what the real risks and opportunities of different technologies are. For example, in order to assess where data-based targeting may pose the greatest opportunities, and what risks it might introduce, we need to understand what technical steps are involved in collecting and using personalised data to target an intervention, and the limitations of existing approaches.49 In order to assess the threat of technological unemployment and design effective policies to tackle it, we need to understand on what kinds of tasks machines are currently able to outperform humans, and the ways we might expect this to change over coming years.

Understanding technological capabilities helps us to think more clearly about the ethical tensions described in section 4 in several ways: by showing whether these tensions are true dilemmas or dilemmas in practice, by helping us to estimate the specific costs and benefits of a technology in a given context, and by giving grounds for plausible trade-offs between values that a technology promotes or threatens. This kind of evidence will be crucial for policymakers and regulators working on the governance of AI-based technologies, as well as helping researchers to identify gaps and priorities for future research.

We also need research focused on forecasting future capabilities, not just measuring existing ones, so we can anticipate and adapt to new challenges.

For the four central tensions, some key questions that will need to be answered include:

• Accuracy versus fair and equal treatment

–– To what degree does accuracy trade off against different definitions of fairness?

49 It is not clear that many of the claims about Cambridge Analytica’s use of ‘psychographic microtargeting’ stand up to rigorous technical scrutiny, for example:
see Resnick (2018).

–– What forms of interpretability are desirable and can be ensured in state-of-the-art models?

–– To what extent is it possible to ensure adequate interpretability without sacrificing accuracy (or other values, such as privacy)?

• Personalisation versus solidarity and citizenship

–– Are there any in-principle or in-practice limits to how fine-grained personalisation can become (using current or foreseeable technology)?

–– To what extent is personalisation able to affect relevant outcomes in a meaningful way (e.g. user satisfaction, consumer behaviour, voting patterns)?

• Quality and efficiency of services versus privacy and informational autonomy

–– By how much could machine learning and 'big data' improve different public services? Can potential gains be quantified?

–– What are the best current methods for securing data privacy, and what are the technical constraints?

• Convenience versus self-actualisation and dignity

–– What types of tasks can feasibly be automated using current or foreseeable technologies?

–– What would the costs (e.g. energy and infrastructure requirements) be for widespread automation of a given task?

• In addition, there are overarching questions to be investigated, which touch upon all four of these tensions and could be applied to others:

–– What do we need to understand about technological capabilities and limitations in order to assess the risks and opportunities they pose in different ethical and societal contexts?

–– How might advances in technological capabilities help resolve tensions between values in applications of ADA, and what are the limitations of technology to do so?

The questions are phrased at a generic level. To help resolve tensions in practice, such questions will need to be tailored to the specific problem domain, as illustrated in the following scenario:

Hypothetical scenario: Imagine the Department for Health and Social Care (DHSC) is developing guidelines on the level of interpretability that should be required for algorithms to be used in different healthcare applications, and how to balance this against potential costs to accuracy. To do this well they need to understand both:

• What the options are for a given application. What different models could be used to analyse radiological imaging, for example, and to what extent and in what ways is each interpretable, and at what cost to accuracy?

• The various costs and benefits in a given context. In some cases, a drop in accuracy might be much more costly than in others, for example if incorrect diagnoses could threaten lives. And the importance of different forms of interpretability will also vary by situation (depending on whether there are other ways to test the reliability of an algorithm, or whether decisions frequently need to be explained to patients, for example).

Without understanding these technical details, the DHSC risks producing highly general guidelines that are at best difficult or impossible to implement in practice, and at worst harmful (advising against ever using a model that cannot be fully explained might prevent some clearly beneficial applications, for example).
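One concrete input into such guidelines is a like-for-like comparison of candidate models, measuring how much (if any) accuracy is given up by choosing a more interpretable one. The sketch below is purely illustrative: it uses a generic tabular dataset as a stand-in (not radiological imaging) and two arbitrary candidate models, and any real assessment would need domain-specific data and clinically meaningful error costs.

```python
# Sketch: quantifying the accuracy cost of choosing a more interpretable model.
# Stand-in dataset and models only; a real assessment needs domain-specific data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    # Coefficients can be read directly, so reviewers can inspect the weighting.
    "logistic regression (more interpretable)": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)
    ),
    # Often stronger on complex data, but harder to explain case by case.
    "random forest (less interpretable)": RandomForestClassifier(
        n_estimators=200, random_state=0
    ),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")

# The gap between the two numbers is one (narrow) measure of what interpretability
# 'costs' here; whether that cost is acceptable is a contextual, not purely
# technical, judgment.
```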

50 See, for example, Grace et al. (2018).



Figure 3. An illustration of how cycles of injustice can be reinforced in how technology is developed, applied, and understood by members of society. [The figure shows a cycle linking four elements: Industry (hostile workplace, discriminatory hiring, leaky pipeline), ADA technology (biased datasets, algorithmic bias, biased design), Society (unequal distribution of risks, benefits and opportunities) and Narratives (perpetuating stereotypes about ADA, e.g. white male developers, and via ADA, e.g. female digital assistants).] © AI Narratives and Justice Research Programme, Leverhulme Centre for the Future of Intelligence, 2018

However, these questions go beyond the technology itself; they also involve the effects and impacts these technologies have on humans. To answer them fully will also require research of a more psychological or sociological nature. The field of human-computer interaction studies many questions regarding the impacts of ADA-based technology on humans, often using psychology and social science methodologies.

Finally, some of these questions ask not just about current capabilities of technology, but also how these could evolve in future. Excellent work on measuring and forecasting technological capabilities already exists.51 However, research on the ethical and societal challenges of those technologies could do much more to draw on and build on this work, to help ensure that our understanding of these broader challenges starts from rigorous thinking about what is – and what could be – technically possible.

Research into technological capabilities and impacts will therefore likely require collaborations between experts from technical ADA research, psychology/social science, forecasting, policy and ethics, as well as people able to translate between these different fields.

5.1.2 Current uses and impacts – what is happening?

In addition to understanding what is technologically possible, there is also a need to better understand: (1) how different technologies are being used, and what kinds of impacts these are in fact having, and (2) what kinds of causes, mechanisms or other influences underlie these impacts.

Regarding (1), at the moment many debates about the ethics of ADA are motivated either by case studies, such as those uncovered by investigative journalists and social commentators, or by hypothetical scenarios about how technologies might be used. While these are crucial to highlighting the potential ethical and societal impacts of ADA technologies, it is unclear to what extent they are representative of current or future developments. There is a risk of over-estimating the frequency of some applications and impacts, while missing others.

One important type of research would be to map and quantify how different ADA technologies are used on a sector-by-sector basis, looking at how they are used in finance, energy, health care, etc.52 Another would be to identify the extent to which the kinds of positive or negative impacts often discussed actually occur in practice in different sectors.

51 See, for example, the AI Index, and the Electronic Frontier Foundation’s work on AI Progress Measurement.

52 See appendix 3 for further ways to break down the space of ethical and societal impacts of ADA.

A potential challenge that will need to be addressed is the extent to which private or public entities are willing to disclose this information.

Regarding (2), understanding how potential impacts come about is crucial to determining the kinds of interventions that can best mitigate them, as explained in the case study below.

CASE STUDY
Cycles of injustice: Race and gender

Tackling algorithmic bias and discrimination requires better understanding of how they fit into a broader cycle of injustice, in which different problems reinforce each other, as illustrated in figure 3. For instance, discriminatory or biased outputs from the AI industry are caused both by a lack of diversity among researchers and developers, and by pre-existing social biases that are reflected in many data-sets (e.g. gender-stereotypical correlations between words in linguistic corpora). The deployment of these biased systems leads to the exacerbation of existing social injustices (e.g. systems advising on which prisoners get parole that use racially biased historical data and result in people of colour staying in prison longer).

These injustices affect who is able to shape the narratives surrounding the technology, which in turn affects both who is able to enter the industry and the original social injustices. For example, creators of AI are invariably portrayed as men, potentially affecting both whether women are motivated to apply and whether they are hired; and digital assistants are invariably framed as female, perpetuating the view that women are subservient. Understanding these interrelations is key to determining how best to address the resulting problems.

To better understand the ways in which different technologies are being used, their impacts on society, and the mechanisms underlying these impacts, we can ask the following questions in relation to our four central tensions:

Accuracy versus fair and equal treatment

• In what sectors and applications are ADA being used to inform decisions with implications for people's lives?

• Is it possible to determine how often these result in differential treatment of different socially salient groups?

• How easy are these algorithms to interpret, and what recourse do individuals have for challenging decisions?

Personalisation versus solidarity and citizenship

• What kinds of messages, interventions and services are already being personalised using machine learning, and in what sectors?

• How 'fine-grained' is this personalisation, and on what kinds of categories is it based?

• What evidence is there that this personalisation can substantially affect attitudes or behaviour?

Quality and efficiency of services versus privacy and informational autonomy

• In what sectors and applications are ADA being used to improve the efficiency of public services?

• What impacts are these specific applications having on autonomy and privacy?

Convenience versus self-actualisation and dignity

• What tasks and jobs have been automated in recent years, and what might we expect to be automated in the near future?

• What effects is automation already having on people's daily lives?

In addition, there are overarching questions to be investigated, which touch upon all four of these tensions and could be applied to others:

• Across different sectors (energy, health, law, etc.), what kinds of ADA-based technologies are already being used, and to what extent?

• What are the societal impacts of these specific applications, in particular on those that might be disadvantaged or underrepresented in relevant sectors (such as women and people of colour), or vulnerable (such as children or older people)?

5.2 Understanding the needs and values of affected communities

In order to make progress on the ethical and societal implications of ADA technologies, it is necessary to understand the perspectives of those who are or will be affected by those technologies. In particular, negotiating trade-offs between values can only happen when the values, and the related hopes and concerns, of everyone who is going to be impacted by these technologies are identified and considered. Identifying these perspectives requires consultation with these end users, or at least with demographically representative groups of members of different publics.53

It must be noted that fostering public understanding of technology alone is far from sufficient. Indeed, some science communication experts argue that it often does not matter whether non-scientists know very little about science:54 a full understanding of how the technology works is not necessary for end users to understand its impact on their lives. Public engagement, which includes public deliberation, polling, and dialogues, is much more important: that is, fostering mutual understanding between researchers, developers, policymakers, and end users. It involves mutual interaction between these groups aimed at understanding not only the science and technology, but also its societal impacts, limits, trade-offs, and pitfalls.

For present purposes, public engagement is crucial for resolving trade-offs and dilemmas in a way that is defensible to all members of society, especially for trade-offs that arise because there are conflicts between the interests of different groups. On any given issue, citizens will rarely all share the same values and perspectives. However, there is evidence that when groups are able to reflect on and articulate what they care about, it is possible to reduce conflict and reach compromise.55 It is important to note, however, that while understanding relevant public values is important to resolving trade-offs, it is not in itself the solution, but only one part of a more complex political process.56

There is a wide range of methods available to foster such engagement.57 These methods can be deployed to elicit a range of views, from uninformed to informed. While uninformed polling aims to gather opinions that the surveyed groups currently hold, informed views can be elicited through engagement strategies that aim first to increase the knowledge base of the surveyed groups before investigating their informed opinions.

Public engagement that aims to resolve trade-offs can take the following forms:

• Quantitative surveys. Such surveys are frequently employed for 'understanding public understanding', i.e. to understand how much the surveyed groups already know about a topic, and how this informs their opinions and attitudes towards a technology.

• Collaborative online consultation. One example is the recent consultation put out by the UK's Centre for Data Ethics and Innovation.58 Using trade-offs and conjoint analysis, possibly in gamified form, this could capture the views of many thousands of citizens and obtain wide-ranging top-of-mind views on how AI might play out in society and how people respond to different ethical decisions.

• Qualitative surveys and interviews. When used to complement quantitative work, this application of qualitative methods is particularly useful for exploring people's existing motivations and the meanings they attach to their interactions with a technology. These methods can also be deployed in combination with an educational element, in order to gather informed views.

• Public dialogue with scenario planning. This typically involves the input of a group of experts, including forecasters and technical analysts, who systematically map out the key uncertainties within a specified time frame. The task for the public then becomes easier – rather than having to engage in the abstract with the risks and benefits of different aspects of complex technologies, they simply have to react to different extrapolated outcomes and talk about how individuals and society would fare in different possible scenarios.

• Citizen fora. The RSA emphasises citizen fora as a particularly important form of public dialogue. These are not just a one-way process of gaining information from the public, but focus on an iterative dialogue where expert stakeholders and citizens work together to produce recommendations for policymakers.

53 We are grateful to Sarah Castell (Ipsos Mori) for valuable input on this section.

54 Hallman in Jamieson et al (2017).

55 Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA, 2018).

56 Which might also involve cost-benefit analysis, drawing on expert perspectives, and evidence on the concrete impacts of technology on society.

57 International Association for Public Participation. www.dvrpc.org/GetInvolved/PublicParticipation/pdf/IAP2_public_participationToolbox.pdf

58 www.gov.uk/government/consultations/consultation-on-the-centre-for-data-ethics-and-innovation

In all these forms of public engagement, the resulting views represent only a snapshot taken at a single moment in time: it will be important also to keep track of how values and perspectives change over time. Over the next few years, for example, concerns around data privacy might grow stronger – or they might dissipate entirely.

We outline existing public engagement work in more detail as part of the literature review in appendix 1, section 4. Based on the work that has already been done, we can identify examples of specific questions for public engagement around the four central tensions (although in order to explore in-depth attitudes to any given technology, many more questions will be relevant).

Accuracy versus fair and equal treatment
• How do individuals experience situations when major decisions about them are being taken with the aid of ADA technology?

• Under what circumstances are people willing to accept differential effectiveness of a technology for different groups?

• What do people consider to be ‘fair and equal treatment’ in different contexts?

Personalisation versus solidarity and citizenship
• In what contexts do people seek out or endorse individualised information or options specifically tailored to a certain ‘profile’ they fit?

• How does this change depending on the level of personal benefit?

• How does it change depending on the field (e.g. health, entertainment, political advertising)?

• How do people experience changes in the public sphere due to automation?

Quality and efficiency of services versus privacy and informational autonomy
• When do people endorse the use of their personal data to make services more efficient?

• How do these attitudes differ depending on exactly what data is being used, who is making use of it, and for what purpose?

• How do these attitudes differ between groups?

Convenience versus self-actualisation and dignity
• How do people experience loss of different jobs or tasks to automation?

• How do answers to this question differ by demographic factors?

• In the light of increasing automation, what would people’s ideal working patterns be?

• How would people like to interact with ADA technologies in the workplace? Which tasks would they prefer to be taken over by these technologies?

In addition, there are several overarching questions to be investigated, which touch upon all four of these tensions and could be applied to others:

• Why, and to what extent, is it important for publics to understand a given technology (including its mechanisms, purposes, owners and creators, etc.)?

• If algorithms are being used as part of making decisions that significantly impact people’s lives, what kinds of explanations of their decisions are needed and appropriate? Does this differ depending on the type of decision, or who is ultimately in charge of it?

• What do the public see as the biggest opportunities and risks of different technologies, and how do they think about trade-offs between the two? How does this differ based on demographic factors? How does this differ based on people’s personal experience with different technologies?

5.3 Applying evidence to resolve tensions

Having highlighted some specific examples of questions in each of the previous subsections, we now pull all of this together to highlight how a stronger evidence base can help unpack and resolve our four central tensions.

Accuracy versus fair and equal treatment
This tension arises when users embed an algorithm as part of a decision-making process, but there are trade-offs between the benefits an algorithm brings (increased
accuracy, for example), and its potential costs (potential biases which may lead to unfair outcomes). To make progress here, we need not only to understand the strengths and limitations of a given algorithm in a specific context, but also to compare them to the relative strengths and limitations of human decision-makers. More research comparing the predictive accuracy and biases of algorithms with those of humans in different contexts would make it clearer when accuracy and fair treatment are really in conflict, and make it easier to decide when using an algorithm is appropriate.
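
To make this kind of comparison concrete, the sketch below is a minimal illustration (in Python, using invented cases rather than data from any real system) of how overall accuracy and group-level false positive rates might be computed for an algorithmic and a human decision procedure applied to the same cases; which error rates and which groupings are the relevant ones is, of course, itself one of the contested questions discussed in this report.

```python
# Illustrative sketch only: invented decisions and outcomes, not real data or any
# deployed system. It compares two decision procedures (labelled "algorithm" and
# "human") on the same cases, reporting overall accuracy and the false positive
# rate for each socially salient group.

from dataclasses import dataclass

@dataclass
class Case:
    group: str        # e.g. "A" or "B"
    outcome: int      # 1 = the predicted event actually occurred, 0 = it did not
    algorithm: int    # 1 = positive decision/prediction, 0 = negative
    human: int

def accuracy(cases, decider):
    return sum(1 for c in cases if getattr(c, decider) == c.outcome) / len(cases)

def false_positive_rate(cases, decider, group):
    # Among people in `group` for whom the event did not occur, how often did
    # this decision procedure nonetheless return a positive decision?
    negatives = [c for c in cases if c.group == group and c.outcome == 0]
    if not negatives:
        return float("nan")
    return sum(getattr(c, decider) for c in negatives) / len(negatives)

cases = [
    Case("A", 0, 0, 0), Case("A", 1, 1, 1), Case("A", 0, 0, 1), Case("A", 1, 1, 0),
    Case("B", 0, 1, 0), Case("B", 1, 1, 1), Case("B", 0, 1, 1), Case("B", 0, 0, 0),
]

for decider in ("algorithm", "human"):
    print(decider, "overall accuracy:", round(accuracy(cases, decider), 2))
    for group in ("A", "B"):
        rate = false_positive_rate(cases, decider, group)
        print(f"  false positive rate for group {group}:", round(rate, 2))
```

Even in a toy example of this kind, the procedure with the higher overall accuracy can be the one with the more uneven false positive rates across groups – which is the sense in which accuracy and fair treatment can pull apart.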
Understanding different societal perspectives will also be a crucial part of navigating the trade-offs that arise when we use algorithms in decision processes. Does automation of life-changing decisions reduce or strengthen trust in public institutions? What level and types of explainability do different groups need to trust algorithms that impact their lives? What kinds of information and characteristics is it acceptable for an algorithm to use in making different types of decisions, and what kinds might be considered unfair?59

Personalisation versus citizenship and solidarity
Here a tension arises because data and machine learning can be used to personalise services and information, with both positive and negative implications for the common good of democracies. In order to understand the trade-offs here, however, we need better evidence than the current, often sensationalist, headlines on what is currently technically possible: what kinds of inferences about groups and individuals can be drawn from publicly or privately available data? What evidence is there that using these conclusions to target information and services is more effective, and with respect to what purposes? What kinds of influence on attitudes might this make possible?

We also need to collect better evidence on the attitudes towards increasing personalisation: where do people see this as benefiting their lives, where is it harmful, and is it possible to draw a clear line between the two? Personalisation is sometimes welcome and sometimes ‘creepy’, and we need to know when and why. To the extent that people reject personalisation in a given domain, which concerns underlie this attitude – for example, are they concerned about no longer sharing the same informational sphere as other members of society, or are they more worried about whether the ability to tailor information gives organisations too much power to manipulate individuals? As with privacy, we might expect attitudes around personalisation and solidarity to change over time: we need to consider what these changes might be, and how they might change the tensions that arise. Scholarship on the wider social and political implications of personalisation for democracy, the welfare state, and political engagement is also essential.

Quality and efficiency of services versus privacy and informational autonomy
As mentioned, this tension arises because personal data may be used to improve public services, but doing so raises challenges for privacy and autonomy of individuals over their information. However, technical methods exist for drawing inferences from aggregate data while protecting the privacy of individual subjects, such as differential privacy.60 The more successful these methods are, and the more we are able to implement new models of consent, the less there is a tension between innovative uses of data and privacy. Understanding the current status of technical research in this area, and what it can and cannot do, will therefore be important for understanding this tension.
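
As a minimal illustration of the kind of technical method at issue – a sketch with invented records and an arbitrarily chosen privacy parameter, not a production implementation – the snippet below adds Laplace noise to an aggregate count before release, so that the published figure depends only weakly on whether any one individual’s record is included.

```python
# Illustrative sketch only: releasing a single count with the Laplace mechanism,
# using invented records and an arbitrarily chosen epsilon. Real deployments rely
# on audited libraries, careful sensitivity analysis and privacy-budget accounting.

import random

def laplace_noise(scale):
    # The difference of two independent exponential draws follows a
    # Laplace(0, scale) distribution.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon):
    # A counting query has sensitivity 1: adding or removing one person's record
    # changes the true count by at most 1, so the noise scale is 1 / epsilon.
    true_count = sum(1 for record in records if predicate(record))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical records: whether each person used a particular public service.
records = [{"used_service": random.random() < 0.3} for _ in range(1000)]
print(private_count(records, lambda r: r["used_service"], epsilon=0.5))
```

The strength of the guarantee depends on the chosen epsilon: smaller values mean noisier, less useful statistics but stronger protection for individuals – exactly the kind of trade-off between data utility and privacy that public engagement and expert judgment would need to inform.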
Where a trade-off between quality of service and privacy remains, understanding public opinion, both uninformed and informed, will be essential for resolving it. It might be that publics endorse the use of their personal data in some cases – lifesaving medical applications, say – but not in others. Notions of privacy and its importance may also evolve over time, changing what aspects of it become more or less central. Expert judgment about the broader social and legal implications of privacy violations and enhancement should supplement these studies.

Convenience versus self-actualisation and dignity
At the heart of this tension lies the fact that automation has clear benefits – saving people time and effort spent on mundane tasks, and increasing convenience and access – but too much automation could threaten our sense of achievement, self-actualisation, and dignity as humans. To explore this tension we therefore need to start with clearer thinking about where automation is seen to be largely beneficial (perhaps because the tasks in question are mindless and alienating), and where it is threatening and inappropriate on moral or prudential grounds (e.g. automating complex tasks involved in education, warfare, immigration, justice, and relationships may be offensive even if narrowly effective). Understanding the perspectives of a wide range of different groups on this question will be especially important, because the activities that are highly valued in one age group or culture may be very different from another.

59 Building on work such as Grgić-Hlača et al. (2018), who study human perceptions of fairness in algorithmic decision-making in the context of criminal risk prediction, proposing a framework to understand why people perceive certain features as fair or unfair in algorithmic decisions.

60 Differential privacy methods aim to maximise the accuracy of inferences drawn from a database while minimising the chance of identifying individual records,
by ensuring that the addition or removal of a single datapoint does not substantially change the outcome. Though differential privacy is not an absolute guarantee
of privacy, it ensures that the risk to an individual of having their data part of a database is limited. For reviews, see for example, Hilton and Dwork (2008).

If we can begin to agree on some tasks that it would be beneficial to automate, then we can begin to collect evidence on current technological capabilities in these areas, and to assess what is needed to make progress. By contrast, if we can more clearly identify the abilities that are central to human flourishing, or that we otherwise do not want to automate, then measuring current capabilities in these areas can help us better assess any threats, and think about potential responses.

In addition to research on the tensions we identify here, we also welcome bold multidisciplinary studies of tensions that explore more radical political and technological changes to come: for example, how ADA-based technologies could look if they were not pursued primarily for profit or for geopolitical advantage, and what socio-economic arrangements alternative to capitalism these technologies could make possible.

5.4 Summary and recommendations

We recommend that research and policy work on the ethics of ADA-based technologies should invest in developing a stronger evidence-base around (a) current and potential technological capabilities, and (b) societal attitudes and needs, identifying and challenging the many assumptions on which current discussion rests. In particular:

• Deepening understanding of technological capabilities and limitations in areas particularly relevant to key ethical and societal issues.

Often discussion of ethical and societal issues is founded on unexamined assumptions about what is currently technologically possible. To assess confidently the risks and opportunities of ADA for society, and to think more clearly about trade-offs between values, we need more critical examination of these assumptions.

• Building a stronger evidence base on the current uses and impacts of ADA-based technologies, especially around key tensions and as they affect marginalised or underrepresented groups.

Understanding specific applications of ADA-based technologies will help us to think more concretely about where and how tensions between values are most likely to arise, and how they might be resolved. Evidence on current societal impacts of technology will provide a stronger basis on which to assess the risks, and to predict possible future impacts.

• Building on existing public engagement work to better understand the perspectives of different members of society on important issues and trade-offs.

As we have emphasised, navigating the ethical and societal implications of ADA requires us to acknowledge tensions between values that the use of these technologies promotes, and values they might threaten. Since different publics will be affected differently by technology, and may hold different values, resolving these tensions requires us to understand varied public opinion on questions related to tensions and trade-offs.

6. Conclusion: A roadmap for research

In this report, we have explored the state of current research and debates on ethical and societal impacts of algorithms, data, and AI, to identify what has been achieved so far and what needs to be done next.

In section 2, we identified a number of key concepts used to categorise the issues raised by ADA-based technologies and a number of ethical principles and values that most actors agree are important. We also identified three key tasks that we believe need to be prioritised in order to move these discussions forward, namely:

• Task 1 – Concept building: Addressing the vagueness and ambiguities in the central concepts used in discussions of ADA, identifying important differences in how terms are used and understood across disciplines, sectors, publics and cultures, and working to build bridges and consensus around these where possible.

• Task 2 – Resolving tensions and trade-offs: Recognising and articulating tensions between the different principles and values at stake in debates about ADA, determining which of these tensions can be overcome through better technologies or other practical solutions, and developing legitimate methods for the resolution of any trade-offs that have to be made.

• Task 3 – Developing an evidence base: Building a stronger evidence base on technological capabilities, applications, and societal needs relevant to ADA, and using these to resolve tensions and trade-offs.

Throughout the report, we have made a number of recommendations and suggested questions for research relevant to achieving each of these tasks. We summarise these below. These are by no means meant to be exhaustive of the questions that could be fruitfully pursued in relation to the ethical and societal impacts of ADA. However, they highlight areas where we believe there is a strong potential for future research to provide high-value contributions to this field.

We envisage the study of the ethical and societal impacts of ADA as a pluralistic interdisciplinary and intersectoral enterprise, drawing on the best of the available methods of the humanities, social sciences and technical disciplines, as well as the expertise of practitioners. Together, the recommendations yield a roadmap for research that strikes a balance between respecting and learning from differences between stakeholders and disciplines, and encouraging consistent and productive criticism that provides relevant and practical knowledge. The point of this knowledge base is to improve the standards, regulations, and systems of oversight of the ADA technologies, which are currently uncertain and in flux. We urge that new approaches to governance and regulation be duly sensitive to the tensions described above and devise legitimate and inclusive institutions that will help communities to identify, articulate, and navigate these tensions, and others as they arise, in the context of greater and more pervasive automation of their lives.

Questions for research

Task 1: Concept Building
To clarify and resolve ambiguities and disagreements in the use of key terms:

• What are the different meanings of key terms in debates about ADA? Such terms include, but are not limited to: fairness, bias, discrimination, transparency, explainability, interpretability, privacy, accountability, dignity, solidarity, convenience, empowerment, and self-actualisation.

• How are these terms used interchangeably, or with overlapping meaning?

• Where are different types of issues being conflated under similar terminology?

• How are key terms used divergently across disciplines, sectors, cultures and publics?

To build conceptual bridges between disciplines and cultures:

• What other cultural perspectives, particularly those from the developing world and marginalised groups, are not currently strongly represented in research and policy work around ADA ethics? How can these perspectives be included, for example by translating relevant policy and research literature, or by building collaborations on specific issues?

• What relevant academic disciplines are currently underrepresented in research on ADA ethics, and what kinds of interdisciplinary research collaborations could help include these disciplines?

To build consensus and manage disagreements:

• Where ambiguities and differences in use of key terms exist, how can consensus and areas of common understanding be reached?

• Where consensus cannot easily be reached, how can we acknowledge, and work productively with, important dimensions of disagreement?

Task 2: Tensions and Trade-offs
To better understand the four central tensions:

• To what extent are we facing true dilemmas, dilemmas in practice, or false dilemmas?

• For the four central tensions, this includes asking:

–– How can the most accurate predictive algorithms be used in a way that does not violate fairness and equality?
–– How can we get the benefits of personalisation and respect the ideals of solidarity and citizenship?
–– How can we use personal data to improve public services and preserve or enhance privacy and informational autonomy?
–– How can we use automation to make our lives more convenient and at the same time promote self-actualisation and dignity?

To legitimate trade-offs:

• How do we best give voice to all stakeholders affected by ADA and articulate their interests with rigour and respect?

• What are acceptable and legitimate trade-offs that are compatible with rights and entitlements of those affected by these technologies?

• Which mechanisms of resolution are most likely to receive broad acceptance?

• For the four central tensions, this includes asking:

–– When, if ever, is it acceptable to use an algorithm that performs worse for a specific subgroup, if that algorithm is more accurate on average across a population?
–– How much should we restrict personalisation of advertising and public services for the sake of preserving ideals of citizenship and solidarity?
–– What risks to privacy and informational autonomy is it acceptable to incur for the sake of better disease screening or greater public health?
–– What kinds of skills should always remain in human hands, and therefore where should we reject innovative automation technologies?

To identify new tensions beyond those highlighted in this report:

• Where might the harms and benefits of ADA-based technologies be unequally distributed across different groups?

• Where might uses of ADA-based technologies present opportunities in the near term but risk compromising important values in the long term?

• Where might we be thinking too narrowly about the impacts of technology? Where might applications that are beneficial from a narrow or individualistic view produce negative externalities?

Task 3: Developing an evidence base
To deepen our understanding of technological capabilities and limitations:

Overarching questions

• What do we need to understand about technological capabilities and limitations in order to assess meaningfully the risks and opportunities they pose in different ethical and societal contexts?

• How might advances in technological capabilities help resolve tensions between values in applications of ADA, and what are the limitations of technology for this purpose?

Applying these overarching questions to our four specific tensions:

• Accuracy versus fair and equal treatment
–– To what extent does accuracy trade off against different definitions of fairness?
–– What forms of interpretability are desirable from the perspective of different stakeholders?
–– What forms of interpretability can be ensured in state-of-the-art models?
–– To what extent is it possible to ensure adequate interpretability without sacrificing accuracy (or other properties, e.g. privacy)?

• Personalisation versus solidarity and citizenship
–– Are there any in-principle or in-practice limits to how fine-grained personalisation can become (using current or foreseeable technology)?
–– To what extent does personalisation meaningfully affect relevant outcomes (e.g. user satisfaction, consumer behaviour, voting patterns)?

• Quality and efficiency of services versus privacy and informational autonomy
–– How much could machine learning and ‘big data’ improve different public services? Can potential gains be quantified?
–– To what extent do current methods allow the use of personal data in aggregate, while protecting the privacy of individuals’ data?
–– What are the best methods for ensuring meaningful consent?

• Convenience versus self-actualisation and dignity
–– What types of tasks can feasibly be automated using current or foreseeable technologies?
–– What would the costs (e.g. energy and infrastructure requirements) be for widespread automation of a given task?

To build a stronger evidence base on the current uses and impacts of technology:

Overarching questions

• Across different sectors (energy, health, law, etc.), what kinds of ADA-based technologies are already being used, and to what extent?

• What are the societal impacts of these specific applications, in particular on groups that might be disadvantaged (such as people of colour), underrepresented (such as women) or vulnerable (such as children or older people)?

Applying these overarching questions to our four specific tensions:

• Accuracy versus fair and equal treatment
–– In what sectors and applications are ADA being used to inform decisions/predictions with implications for people’s lives?
–– Is it possible to determine how often these result in differential treatment of different socially salient groups?
–– How easy to interpret are the algorithms being used to inform decisions that have implications for people’s lives? And what recourse do individuals have for challenging decisions?

• Personalisation versus solidarity and citizenship
–– What kinds of messages, interventions and services are already being personalised using machine learning, and in what sectors?
–– How ‘fine-grained’ is this personalisation, and on what kinds of categories is it based?
–– What evidence is there that this personalisation can substantially affect attitudes or behaviour?

• Quality and efficiency of services versus privacy and informational autonomy
–– In what specific sectors and applications are ADA being used to improve the efficiency of public services?
–– What impacts are these specific applications having on autonomy and privacy?

• Convenience versus self-actualisation and dignity
–– What effects is automation already having on daily living activities of different publics?

To better understand the perspectives of different interest groups:

Overarching questions:

• What are the publics’ preferences about understanding a given technology (including its mechanisms, purposes, owners and creators, etc.)?

• If algorithms are being used as part of making decisions that significantly impact people’s lives, what kinds of explanations of these decisions would people like to be able to access? Does this differ depending on the type of decision, or who is ultimately in charge of it?

• What do different publics see as the biggest opportunities and risks of different technologies, and how do they think about trade-offs between the two? How does this differ based on demographic factors? How does this differ based on people’s personal experience with different technologies?

Applying these overarching questions to our four specific tensions:

• Accuracy versus fair and equal treatment
–– How do different publics experience differential effectiveness of a technology?
–– What do people consider to be ‘fair and equal treatment’ in different contexts?

• Personalisation versus solidarity and citizenship
–– In what contexts do people seek out or endorse individualised information or options specifically tailored to a certain ‘profile’ they fit?
–– How do people experience changes in the public sphere due to automation?

• Quality and efficiency of services versus privacy and informational autonomy
–– When do publics endorse the use of their personal data to make public services more efficient?
–– How are these attitudes different depending on exactly what data is being used, who is making use of it, and for what purpose?
–– How do these attitudes differ across groups?

• Convenience versus self-actualisation and dignity
–– What tasks and jobs are people most concerned about losing to automation? How do answers to this question differ by demographic factors?
–– In the light of increasing automation, what would ideal working patterns be?
–– How would people like to interact with ADA technologies in the workplace?
–– Which tasks is it ethically and prudentially appropriate for technologies to take over?

Bibliography
Acs, G., Melis, L., Castelluccia, C., & De Cristofaro, E. Anderson, M., & Anderson, S. L. (2007). The status of Bloomberg News. (2018). China Now Has the Most
(2018). Differentially private mixture of generative machine ethics: a report from the AAAI Symposium. Valuable AI Startup in the World.
neural networks. IEEE Transactions on Knowledge Minds and Machines, 17(1): 1–10.
and Data Engineering. Boddington, P., Millican, P., & Wooldridge, M. (2017).
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Minds and Machines Special Issue: Ethics and Artificial
Adams, F. and Aizawa, K. (2001). The Bounds of Machine bias. ProPublica, May, 23. Intelligence. Minds and Machines, 27(4): 569–574.
Cognition. Philosophical Psychology, 14(1): 43–64.
Ovanessoff, A. and Plastino, E. (2017). How Can Bogaerts, B., Vennekens, J., & Denecker, M. (2017).
Adel, T., Ghahramani, Z., & Weller, A. (2018). AI Drive South America’s Growth? Accenture Safe inductions: An algebraic study. Paper presented
Discovering Interpretable Representations Research Report. at the Proceedings of the Twenty-Sixth International
for Both Deep Generative and Discriminative Joint Conference on Artificial Intelligence (IJCAI).
Models. In International Conference on Machine Arney, C. (2016). Our Final Invention: Artificial
Learning: 50–59. Intelligence and the End of the Human Era. Bostrom, N. (2003). Ethical issues in advanced artificial
Mathematics and Computer Education, 50(3): 227. intelligence. Science Fiction and Philosophy: From Time
Aditya, S. (2017). Explainable Image Understanding Travel to Superintelligence, 277–284.
Using Vision and Reasoning. Paper presented at ASI Data Science and Slaughter & May. (2017).
the AAAI. Superhuman Resources: Responsible Deployment of AI in Bostrom, N. (2014). Superintelligence. Oxford
Business. University Press.
Aha, D. W., & Coman, A. (2017). The AI Rebellion:
Changing the Narrative. Paper presented at the AAAI. Australian Computing Society. (2017). Australia’s Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P.,
Digital Pulse in 2017. Garfinkel, B., Filar, B. (2018). The malicious use of artificial
AI Now Institute. (2017). AI Now Symposium intelligence: Forecasting, prevention, and mitigation. arXiv
2017 Report. Barocas, S. (2014). Data mining and the preprint arXiv:1802.07228.
discourse on discrimination. Proceedings
Alekseev, A. (2017). Artificial intelligence and ethics: of the Data Ethics Workshop, Conference Burch, K. T. (2012). Democratic transformations:
Russian theory and communication practices. Russian on Knowledge Discovery and Data Mining Eight conflicts in the negotiation of American identity.
Journal of Communication, 9(3): 294–296. (KDD). https://dataethics.github.io/proceedings/ A&C Black.
DataMiningandtheDiscourseOnDiscrimination.pdf
Alexandrova A. and D. Haybron (2011) High fidelity Burns, T.W., O’Connor, D.J., & Stocklmayer, S.M. (2003)
economics, in Elgar Companion to Recent Economic Barocas, S., & Selbst, A. D. (2016). Big data’s disparate Science communication: a contemporary definition.
Methodology (edited by John Davis and Wade Hands), impact. California Law Review, 104: 671. Public Understanding of Science, 12: 183–202.
Edward Elgar, 94–117.
Becker, B. (2006). Social robots-emotional agents: Burrell, J. (2016). How the machine ‘thinks’:
Alexandrova A. (2017) A Philosophy for the Science Some remarks on naturalizing man-machine Understanding opacity in machine learning algorithms.
of Well-being, New York: Oxford University Press. interaction. International Review of Information Big Data & Society, 3(1), 1–12.
Ethics 6: 37–45.
Alexandrova, A. (2018) Can the Science of Well-Being Burton, E., Goldsmith, J., Koenig, S., Kuipers, B., Mattei,
Be Objective?, The British Journal for the Philosophy Bei, X., Chen, N., Huzhang, G., Tao, B., & Wu, J. (2017). N., & Walsh, T. (2017). Ethical considerations in artificial
of Science, 69(2): 421–445. https://doi.org/10.1093/ Cake cutting: envy and truth. Paper presented at the intelligence courses. arXiv preprint arXiv:1701.07769.
bjps/axw027 Proceedings of the 26th International Joint Conference
on Artificial Intelligence. Bygrave, L. A. (2001). Automated profiling: minding the
Altman, A. 2015. Discrimination, The Stanford machine: article 15 of the ec data protection directive
Encyclopedia of Philosophy (Winter 2016 Edition), Bei, X., Qiao, Y., & Zhang, S. (2017). Networked fairness and automated profiling. Computer Law & Security
Edward N. Zalta (ed.), https://plato.stanford.edu/ in cake cutting. arXiv preprint arXiv:1707.02033. Review, 17(1): 17–24.
entries/discrimination/
Belle, V. (2017). Logic meets probability: towards Calders, T., & Verwer, S. (2010). Three naive Bayes
Alkoby, S., & Sarne, D. (2017). The Benefit in Free explainable AI systems for uncertain worlds. Paper approaches for discrimination-free classification. Data
Information Disclosure When Selling Information presented at the Proceedings of the Twenty- Mining and Knowledge Discovery, 21(2): 277–292.
to People. Paper presented at the AAAI. Sixth International Joint Conference on Artificial
Intelligence, IJCAI. Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M.,
Allen, C., Varner, G., & Zinser, J. (2000). Prolegomena & Elhadad, N. (2015). Intelligible models for healthcare:
to any future artificial moral agent. Journal of Bess, M. (2010). Enhanced Humans versus “Normal Predicting pneumonia risk and hospital 30-day
Experimental & Theoretical Artificial Intelligence, People”: Elusive Definitions, The Journal of Medicine readmission. Paper presented at the Proceedings
12(3): 251–261. and Philosophy: A Forum for Bioethics and Philosophy of the 21th ACM SIGKDD International Conference
of Medicine, 35(6): 641–655. https://doi.org/10.1093/ on Knowledge Discovery and Data Mining.
American Honda Motor Co. (2017). ASIMO: jmp/jhq053
The World’s Most Advanced Humanoid Robot. Cave, S. (2017) Intelligence: A History. Aeon.
Available online: http://asimo.honda.com Binns, R. (2017). Fairness in Machine Learning:
Lessons from Political Philosophy. arXiv preprint Cave, S., & Dihal, K. (2018) Ancient dreams
Anderson, B., & Horvath, B. (2017). The rise of arXiv:1712.03586. of intelligent machines: 3,000 years of robots.
the weaponized ai propaganda machine. Scout, Nature, 559: 473–475.
February, 12. Biran, O., & McKeown, K. R. (2017). Human-Centric
Justification of Machine Learning Predictions. Paper
presented at the IJCAI.
41

Cech, E. A. (2014). Culture of disengagement in Crawford, K., & Calo, R. (2016). There is a blind spot Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M.,
engineering education? Science, Technology, & Human in AI research. Nature, 538(7625): 311. Blau, H. M., & Thrun, S. (2017). Dermatologist-level
Values, 39(1): 42–72. classification of skin cancer with deep neural networks.
Crenshaw, K. (1991). Mapping the Margins: Nature, 542(7639): 115.
Chace, C. (2015). Surviving AI: The promise and peril Intersectionality, Identity Politics, and Violence
of artificial intelligence. Bradford: Three Cs Publishing. Against Women of Color. Stanford Law Review, EU EDPS Ethics Advisory Group. (2018). Towards
43(6): 1241–1299. a digital ethics.
Chakraborti, T., Sreedharan, S., Zhang, Y.,
& Kambhampati, S. (2017). Plan explanations as model Dafoe, A. (2018). AI Governance: A Research Agenda. Eubanks, V. (2018). Automating inequality: How
reconciliation: Moving beyond explanation as soliloquy. University of Oxford. high-tech tools profile, police, and punish the poor.
arXiv preprint arXiv:1701.08317. St. Martin’s Press.
Daly, A. (2016). Private power, online information flows
Chierichetti, F., Kumar, R., Lattanzi, S., & Vassilvitskii, S. and EU law: Mind the gap. Bloomsbury Publishing. European Group on Ethics in Science and New
(2017). Fair clustering through fairlets. Paper Technologies. (2018). Statement on Artificial
presented at the Advances in Neural Information Danks, D., & London, A. J. (2017). Algorithmic bias Intelligence, Robotics and “Autonomous” Systems.
Processing Systems. in autonomous systems. Paper presented at the
Proceedings of the Twenty-Sixth International Joint Fast, E., & Horvitz, E. (2017). Long-Term Trends in
Chouldechova, A. (2017). Fair prediction with disparate Conference on Artificial Intelligence. the Public Perception of Artificial Intelligence. Paper
impact: A study of bias in recidivism prediction presented at the AAAI.
instruments. Big data, 5(2): 153–163. Davies, J. (2016). Program good ethics into artificial
intelligence. Nature News. Feldman, M., Friedler, S. A., Moeller, J., Scheidegger,
Clark, A. (1996). Being There: Putting Brain, Body, and C., & Venkatasubramanian, S. (2015). Certifying
World Together Again. Cambridge: MIT Press. Dawkins, R. (1982). The Extended Phenotype. New York: and removing disparate impact. Paper presented
Oxford Press. at the Proceedings of the 21th ACM SIGKDD
Clark, A. (2008). Supersizing the Mind: Embodiment, International Conference on Knowledge Discovery
Action, and Cognitive Extension. New York: Oxford and Data Mining.
Devlin, H. (2017). AI programs exhibit racial and
University Press. gender biases, research reveals. The Guardian, 13.
Fish, B., Kun, J., & Lelkes, Á. D. (2016). A confidence-
Clark, A. and D. Chalmers. (1998). The Extended Mind. based approach for balancing fairness and accuracy.
Dietterich, T. G. (2017). Steps toward robust artificial
Analysis, 58: 7–19. Paper presented at the Proceedings of the 2016 SIAM
intelligence. AI Magazine, 38(3): 3–24.
International Conference on Data Mining.
Clifford, D., Graef, I., & Valcke, P. (2018). Pre-Formulated Dignum, V. (2018). Ethics in artificial intelligence:
Declarations of Data Subject Consent–Citizen-Consumer Fisher, D. H. (2017). A Selected Summary of AI
introduction to the special issue.
Empowerment and the Alignment of Data, Consumer for Computational Sustainability. Paper presented
and Competition Law Protections. at the AAAI.
Ding, J. (2018). Deciphering China’s AI Dream. University
of Oxford.
Coeckelbergh, M., Pop, C., Simut, R., Peca, A., Pintea, Forster, E. M. (1947). Collected short stories of EM
S., David, D. & Vanderborght, B. (2016). A Survey Forster. Sidgwick and Jackson.
Dunbar, M. (2017). To Be a Machine: Adventures
of Expectations About the Role of Robots in Robot- Among Cyborgs, Utopians, Hackers, and the Futurists
Assisted Therapy for Children with ASD: Ethical Frank, R. H. (2000). Why is cost-benefit analysis so
Solving the Modest Problem of Death. The Humanist,
Acceptability, Trust, Sociability, Appearance, and controversial? The Journal of Legal Studies, 29(S2):
77(3): 42.
Attachment. Science and Engineering Ethics 22 (1): 913–930.
47–65.
Dwork, C. (2008). Differential privacy: A survey
Freuder, E. C. (2017). Explaining Ourselves:
of results. Paper presented at the International
Coggon, J., and J. Miola. (2011). Autonomy, Liberty, and Human-Aware Constraint Reasoning. Paper
Conference on Theory and Applications of
Medical Decision-Making. The Cambridge Law Journal, presented at the AAAI.
Models of Computation.
70(3): 523–547.
Frey, C. B., & Osborne, M. A. (2017). The future
Dwork, C. (2017). What’s Fair? Paper presented
Collins, S. and A. Ruina. (2005). A bipedal walking of employment: how susceptible are jobs to
at the Proceedings of the 23rd ACM SIGKDD
robot with efficient and human-like gait. Proceedings computerisation? Technological forecasting and
International Conference on Knowledge Discovery
IEEE International Conference on Robotics and social change, 114: 254–280.
and Data Mining.
Automation, Barcelona, Spain.
Friedler, S. A., Scheidegger, C., & Venkatasubramanian, S.
Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel,
Conitzer, V., Sinnott-Armstrong, W., Borg, J. S., Deng, (2016). On the (im) possibility of fairness. arXiv preprint
R. (2012). Fairness through awareness. Paper
Y., & Kramer, M. (2017). Moral Decision Making arXiv:1609.07236.
presented at the Proceedings of the 3rd innovations
Frameworks for Artificial Intelligence. Paper presented in theoretical computer science conference.
at the AAAI. Friedman, B., & Nissenbaum, H. (1996). Bias in
computer systems. ACM Transactions on Information
Edwards, L., & Veale, M. (2017). Slave to the Algorithm:
Cowls, J., & Floridi, L. (2018). Prolegomena to a White Systems (TOIS), 14(3): 330–347.
Why a Right to an Explanation Is Probably Not the
Paper on an Ethical Framework for a Good AI Society. Remedy You Are Looking for. Duke Law and Technology
SSRN Electronic Journal. Future Advocacy and The Wellcome Trust. (2018).
Review 16(1): 18.
Ethical, social and political challenges of artificial
Crawford, K. (2016). Artificial intelligence’s white guy intelligence in health.
Ess, C. (2006). Ethical pluralism and global information
problem. The New York Times, 25. ethics. Ethics and Information Technology, 8(4): 215–226.
42

Garrett, R. K., E. C. Nisbet, and E. K. Lynch. (2013). Grgić-Hlača, N., Redmiles, E. M., Gummadi, K. P., Helbing, D., Frey, B. S., Gigerenzer, G., Hafen, E., Hagner,
Undermining the corrective effects of media-based & Weller, A. (2018). Human perceptions of fairness in M., Hofstetter, Y., Zwitter, A. (2017). Will democracy
political fact checking? The role of contextual cues and algorithmic decision making: A case study of criminal risk survive big data and artificial intelligence? Scientific
naïve theory. Journal of Communication 63(4): 617–637. prediction. arXiv preprint arXiv:1802.09548. American, 25.

Garrett, R. K., E. C. Weeks, and R. L. Neo. (2016). Greenwald, A. G. (2017). An AI stereotype catcher. Hilton, M. Differential privacy: a historical survey.
Driving a wedge between evidence and beliefs: How Science, 356(6334): 133–134. Cal Poly State University.
online ideological news exposure promotes political
misperceptions. Journal of Computer-Mediated Gribbin, J. (2013). Computing with quantum cats: From House of Commons Science and Technology
Communication 21(5): 331–348. Colossus to Qubits. Random House. Committee, The Big Data Dilemma. 12 February 2016,
HC 468 2015–16.
Gellert, R. (2015). Data protection: a risk regulation? Gunkel, D. J., & Bryson, J. (2014). Introduction to
Between the risk management of everything and the the special issue on machine morality: The machine Hurley, S. L. (1998). Vehicles, Contents, Conceptual
precautionary alternative. International Data Privacy as moral agent and patient. Philosophy & Technology, Structure and Externalism. Analysis 58: 1–6.
Law, 5(1): 3. 27(1): 5–8.
IEEE Global Initiative on Ethics of Autonomous and
Gellert, R. (2018). Understanding the notion of risk Hadfield-Menell, D., Dragan, A., Abbeel, P., & Russell, Intelligent Systems. (2018) Ethically aligned design:
in the General Data Protection Regulation. Computer S. (2016). The off-switch game. arXiv preprint a vision for prioritizing human wellbeing with artificial
Law & Security Review, 34(2): 279–288. arXiv:1611.08219. intelligence and autonomous systems.

Goh, G., Cotter, A., Gupta, M., & Friedlander, M. P. Hajian, S., Bonchi, F., & Castillo, C. (2016). Algorithmic Imberman, S. P., McManus, J., & Otts, G. (2017).
(2016). Satisfying real-world goals with dataset bias: From discrimination discovery to fairness-aware Creating Serious Robots That Improve Society. Paper
constraints. Paper presented at the Advances in data mining. Paper presented at the Proceedings of presented at the AAAI.
Neural Information Processing Systems. the 22nd ACM SIGKDD international conference on
knowledge discovery and data mining. Institute of Technology and Society in Rio. (2017).
Goldsmith, J., & Burton, E. (2017). Why Teaching Ethics Big Data in the Global South: Report on the Brazilian
to AI Practitioners Is Important. Paper presented at Hajian, S., & Domingo-Ferrer, J. (2013). A methodology Case Studies.
the AAAI. for direct and indirect discrimination prevention in
data mining. IEEE transactions on knowledge and data Ipsos MORI and the Royal Society. (2017). Public
Goodman, B., & Flaxman, S. (2016). European Union engineering, 25(7): 1445–1459. views of Machine Learning.
regulations on algorithmic decision-making and a “right to
explanation”. arXiv preprint arXiv:1606.08813. Hajian, S., Domingo-Ferrer, J., Monreale, A., Pedreschi, Jabbari, S., Joseph, M., Kearns, M., Morgenstern, J.,
D., & Giannotti, F. (2015). Discrimination-and privacy- & Roth, A. (2016). Fairness in reinforcement learning.
Government Office for Science. (2016). Artificial aware patterns. Data Mining and Knowledge Discovery, arXiv preprint arXiv:1611.03071.
intelligence: opportunities and implications for the future 29(6): 1733–1782.
of decision making. Johndrow, J. E., & Lum, K. (2017). An algorithm for
Hanisch, C. (1969). The personal is political. Available removing sensitive information: application to race-
Government Office for Science. (2017). The Futures at www.carolhanisch.org/CHwritings/PIP.html independent recidivism prediction. arXiv preprint
Toolkit: Tools for Futures Thinking and Foresight Across arXiv:1703.04957.
UK Government. Hanisch, C. (2006). The personal is political: The
women’s liberation movement classic with a new Jotterand, F. and V. Dubljevic (Eds). (2016). Cognitive
GPI Atlantic. (1999). Gender Equality in the Genuine explanatory introduction. Women of the World, Unite. Enhancement: Ethical and Policy Implications in
Progress Index. Made to Measure Symposium International Perspectives. Oxford University Press.
Synthesis Paper, Halifax, October 3–6. Harari, Y. N. (2016). Homo Deus: A brief history
of tomorrow. Random House. Kamarinou, D., Millard, C., & Singh, J. (2016). Machine
Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. Learning with Personal Data. Queen Mary School of
(2018). When Will AI Exceed Human Performance? Hardt, M., Price, E., & Srebro, N. (2016). Equality Law Legal Studies Research Paper, 247.
Evidence from AI Experts. Journal of Artificial of opportunity in supervised learning. Paper
Intelligence Research, 62: 729–754. presented at the Advances in neural information Kamiran, F., & Calders, T. (2012). Data preprocessing
processing systems. techniques for classification without discrimination.
Graef, I. (2016). EU Competition Law, Data Protection Knowledge and Information Systems, 33(1), 1–33.
and Online Platforms: Data as Essential Facility: Kluwer Harel, Y., Gal, I. B., & Elovici, Y. (2017). Cyber Security
Law International. and the Role of Intelligent Systems in Addressing its Kamiran, F., Calders, T., & Pechenizkiy, M. (2010).
Challenges. ACM Transactions on Intelligent Systems Discrimination aware decision tree learning. Paper
Grafman, J. and I. Litvan. (1999). Evidence for and Technology (TIST), 8(4): 49. presented at the Data Mining (ICDM), 2010 IEEE
Four Forms of Neuroplasticity. In Neuronal 10th International Conference on Computer and
Plasticity: Building a Bridge from the Laboratory Haybron, D. M., & Alexandrova, A. (2013). Paternalism Information Technology.
to the Clinic. J. Grafman and Y. Christen (eds.). in economics. Paternalism: Theory and practice, (eds
Springer-Verlag Publishers. Christian Coons and Michael Weber), Cambridge Kamiran, F., Karim, A., & Zhang, X. (2012). Decision
University Press, 157–177. theory for discrimination-aware classification. Paper
Grbovic, M., Radosavljevic, V., Djuric, N., Bhamidipati, presented at the Data Mining (ICDM), 2012 IEEE
N., & Nagarajan, A. (2015). Gender and interest Helberger, N., Zuiderveen Borgesius, F. J., & Reyna, 12th International Conference on Dating Mining.
targeting for sponsored post advertising at tumblr. A. (2017). The perfect match? A closer look at the
Paper presented at the Proceedings of the 21th ACM relationship between EU consumer law and data Kamiran, F., Žliobaitė, I., & Calders, T. (2013).
SIGKDD International Conference on Knowledge protection law. Quantifying explainable discrimination and removing
Discovery and Data Mining. illegal discrimination in automated decision making.
Knowledge and Information systems, 35(3): 613–644.
43

Kamishima, T., Akaho, S., Asoh, H., & Sakuma, J. (2012). Lehmann, H., Iacono, I., Dautenhahn, K., Marti, P. and Mindell, D. (2002). Between human and machine:
Fairness-aware classifier with prejudice remover Robins, B. (2014). Robot companions for children with feedback, control, and computing before cybernetics.
regularizer. Paper presented at the Joint European down syndrome: A case study. Interaction Studies. Social Baltimore: Johns Hopkins University Press.
Conference on Machine Learning and Knowledge Behaviour and Communication in Biological and Artificial
Discovery in Databases. Systems 15(1), pp. 99–112. Minsky, M. (1982). Semantic information processing:
MIT Press.
Kaplan, J. (2015). Humans need not apply: A guide Levy, N. Rethinking Neuroethics in the Light of the
to wealth and work in the age of artificial intelligence. Extended Mind Thesis, The American Journal of Bioethics, Minton, S. N. (2017). The Value of AI Tools: Some
Yale University Press. 7(9): 3–11. Lessons Learned. AI Magazine, 38(3).

Kaplan, J. (2016). Artificial Intelligence: What everyone Lewis-Kraus, G. (2016). The great AI awakening. Monbiot, G. (2017). Big data’s power is terrifying. That
needs to know. Oxford University Press. The New York Times Magazine, 14. could be good news for democracy. The Guardian.

Kearns, M., Roth, A., & Wu, Z. S. (2017). Meritocratic Li, Fei-Fei. (2018). How to Make A.I. That’s Good for Montréal Declaration on Responsible AI. (2018).
fairness for cross-population selection. Paper People. The New York Times. Montréal Declaration for a Responsible Development
presented at the International Conference on of Artificial Intelligence. Available at
Machine Learning. Lipton, Z. C. (2016). The Mythos of Model www.montrealdeclaration-responsibleai.com/
Interpretability. ICML 2016 Workshop on Human the-declaration
Kleinberg, J., Ludwig, J., Mullainathan, S. (2016). Interpretability in Machine Learning.
A Guide to Solving Social Problems with Machine Moore, A. (2017). Critical elitism: Deliberation,
Learning. Harvard Business Review. Luong, B. T., Ruggieri, S., & Turini, F. (2011). k-NN democracy, and the problem of expertise. Cambridge
as an implementation of situation testing for University Press.
Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). discrimination discovery and prevention. Paper
Inherent trade-offs in the fair determination of risk scores. presented at the Proceedings of the 17th ACM Moravec, H. (1988). Mind children: The future of robot
arXiv preprint arXiv:1609.05807. SIGKDD international conference on Knowledge and human intelligence. Harvard University Press.
discovery and data mining.
Koops, B. (2013). On decision transparency, or how to Mukherjee, S. (2017). A.I. Versus M.D.: What happens
enhance data protection after the computational turn. Lyons, J. B., Clark, M. A., Wagner, A. R., & Schuelke, M. when a diagnosis is automated? The New Yorker.
Privacy, due process and the computational turn: the J. (2017). Certifiable Trust in Autonomous Systems:
philosophy of law meets the philosophy of technology, Making the Intractable Tangible. AI Magazine, 38(3). Müller, V. C. (2014). Risks of artificial general
189–213. intelligence. Journal of Experimental and Theoretical
Marcus, G. (2012). Will a Robot Take Your Job? Artificial Intelligence, 26(3): 297-301.
Kraemer, F., Van Overveld, K., & Peterson, M. (2011). The New Yorker.
Is there an ethics of algorithms? Ethics and Information Noble, S. U. (2018). Algorithms of Oppression: How
Technology, 13(3): 251–260. Marcus, G. (2013). Why we should think about search engines reinforce racism. NYU Press.
the threat of artificial intelligence. The New Yorker.
Kristoffersson, A., Coradeschi, S., Loutfi, A., Noë, A. (2009). Out of our heads. Hill and Wang.
& Severinson-Eklundh, K. (2014). Assessment Marien, M. (2014). The second machine age: Work,
of interaction quality in mobile robotic telepresence: An elderly perspective. Interaction Studies 15(2): 343–357.
progress, and prosperity in a time of brilliant technologies. Cadmus, 2(2): 174.
Kuner, C., Svantesson, D. J. B., Cate, F. H., Lynskey, O., & Millard, C. (2017). Machine learning with personal data: is data protection law smart enough to meet the challenge? International Data Privacy Law, 7(1): 1–2.
Kurzweil, R. (2013). How to create a mind: The secret of human thought revealed. Penguin.
Kusner, M. J., Loftus, J., Russell, C., & Silva, R. (2017). Counterfactual fairness. Paper presented at the Advances in Neural Information Processing Systems.
Kuzelka, O., Davis, J., & Schockaert, S. (2017). Induction of interpretable possibilistic logic theories from relational data. arXiv preprint arXiv:1705.07095.
Kökciyan, N., & Yolum, P. (2017). Context-Based Reasoning on Privacy in Internet of Things. Paper presented at the IJCAI.
Langley, P., Meadows, B., Sridharan, M., & Choi, D. (2017). Explainable Agency for Intelligent Autonomous Systems. Paper presented at the AAAI.
Mattu, S. and Hill, K. (2018). The House That Spied on Me. Gizmodo.
McAllister, R., Gal, Y., Kendall, A., Van Der Wilk, M., Shah, A., Cipolla, R., & Weller, A. V. (2017). Concrete problems for autonomous vehicle safety: Advantages of Bayesian deep learning.
McFarland, D. (2009). Guilty robots, happy dogs: the question of alien minds. Oxford University Press.
Mei, J.-P., Yu, H., Shen, Z., & Miao, C. (2017). A social influence based trust model for recommender systems. Intelligent Data Analysis, 21(2): 263–277.
Menary, R. (2007). Cognitive Integration: Mind and Cognition Unbounded. Palgrave Macmillan.
Mendoza, I., & Bygrave, L. A. (2017). The Right not to be Subject to Automated Decisions based on Profiling. In EU Internet Law: 77–98.
Milli, S., Hadfield-Menell, D., Dragan, A., & Russell, S. (2017). Should robots be obedient? arXiv preprint arXiv:1705.09990.
Novitske, L. (2018). The AI Invasion is Coming to Africa and It's a Good Thing. Stanford Social Innovation Review.
Nushi, B., Kamar, E., Horvitz, E., & Kossmann, D. (2017). On Human Intellect and Machine Failures: Troubleshooting Integrative Machine Learning Systems. Paper presented at the AAAI.
Omidyar Network and Upturn. (2018). Public scrutiny of automated decisions: early lessons and emerging methods. Available online at www.omidyar.com/insights/public-scrutiny-automated-decisions-early-lessons-and-emerging-methods
O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
Open Data Institute. (2017). Helping organisations navigate concerns in their data practices. Available online at https://theodi.org/article/data-ethics-canvas/
Pagallo, U. (2017). From automation to autonomous systems: a legal phenomenology with problems of accountability. Paper presented at the Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17).
Parens, E. (1998). Enhancing human traits: Ethical and social implications (Hastings Center studies in ethics). Washington, D.C.: Georgetown University Press.
Parens, E. (2015). Shaping ourselves: On technology, flourishing, and a habit of thinking. Oxford University Press.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Pedreschi, D., Ruggieri, S., & Turini, F. (2009). Measuring discrimination in socially-sensitive decision records. Paper presented at the Proceedings of the 2009 SIAM International Conference on Data Mining.
Phan, N., Wu, X., Hu, H., & Dou, D. (2017). Adaptive laplace mechanism: differential privacy preservation in deep learning. Paper presented at the 2017 IEEE International Conference on Data Mining (ICDM).
Pleiss, G., Raghavan, M., Wu, F., Kleinberg, J., & Weinberger, K. Q. (2017). On fairness and calibration. Paper presented at the Advances in Neural Information Processing Systems.
Powles, J., & Hodson, H. (2017). Google DeepMind and healthcare in an age of algorithms. Health and Technology, 7(4): 351–367.
Prainsack, B., & Buyx, A. (2017). Solidarity in biomedicine and beyond (Vol. 33). Cambridge University Press.
National Science and Technology Council, Obama White House. (2016). Preparing for the Future of Artificial Intelligence.
Purtova, N. (2018). The law of everything. Broad concept of personal data and future of EU data protection law. Law, Innovation and Technology, 10(1): 40–81.
Quadrianto, N., & Sharmanska, V. (2017). Recycling privileged learning and distribution matching for fairness. Paper presented at the Advances in Neural Information Processing Systems.
Reed, C., Kennedy, E., & Silva, S. (2016). Responsibility, Autonomy and Accountability: legal liability for machine learning. Queen Mary School of Law Legal Studies Research Paper No. 243/2016. Available at SSRN: https://ssrn.com/abstract=2853462
Resnick, B. (2018). Cambridge Analytica's "psychographic microtargeting": what's bullshit and what's legit. Vox.
Richert, A., Müller, S., Schröder, S., and Jeschke, S. (2018). Anthropomorphism in social robotics: empirical results on human–robot interaction in hybrid production workplaces. AI and Society 33(3): 413–424.
Robins, B., Dautenhahn, K., and Dubowski, J. (2006). Does appearance matter in the interaction of children with autism with a humanoid robot? Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 7(3): 509–542.
Romei, A., & Ruggieri, S. (2014). A multidisciplinary survey on discrimination analysis. The Knowledge Engineering Review, 29(5): 582–638.
Ross, A. S., Hughes, M. C., & Doshi-Velez, F. (2017). Right for the right reasons: Training differentiable models by constraining their explanations. arXiv preprint arXiv:1703.03717.
Royal Society. (2017). Machine learning: the power and promise of computers that learn by example.
Royal Society and The British Academy. (2017). Data management and use: Governance in the 21st century.
Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA). (2018). Artificial Intelligence: Real Public Engagement.
Russell, C., Kusner, M. J., Loftus, J., & Silva, R. (2017). When worlds collide: integrating different counterfactual assumptions in fairness. Paper presented at the Advances in Neural Information Processing Systems.
Schermer, M. (2013). Health, Happiness and Human Enhancement – Dealing with Unexpected Effects of Deep Brain Stimulation. Neuroethics, 6(3): 435–445.
Scheutz, M. (2017). The case for explicit ethical agents. AI Magazine, 38(4): 57–64.
Searle, J. R. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3: 417–457.
Selbst, A. D., & Barocas, S. (2018). The intuitive appeal of explainable machines. Fordham Law Review.
Selbst, A. D., & Powles, J. (2017). Meaningful information and the right to explanation. International Data Privacy Law, 7(4): 233–242.
Select Committee on Artificial Intelligence. (2018). AI in the UK: Ready, Willing, and Able? HL 100 2017–19. London: House of Lords.
Shapiro, L. A. (2004). The Mind Incarnate. MIT Press.
Shariff, A., Rahwan, I., and Bonnefon, J. (2016). Whose Life Should Your Car Save? New York Times.
Sharkey, A. (2014). Robots and human dignity: a consideration of the effects of robot care on the dignity of older people. Ethics and Information Technology 16(1): 63–75.
Sharkey, A., and Sharkey, N. (2012). Granny and the robots: ethical issues in robot care for the elderly. Ethics and Information Technology 14(1): 27–40.
Shell International BV. (2008). Scenarios: An Explorer's Guide.
Shirk, J. L., Ballard, H. L., Wilderman, C. C., Phillips, T., Wiggins, A., Jordan, R., McCallie, E., Minarchek, M., Lewenstein, B. V., Krasny, M. E., and Bonney, R. (2012). Public participation in scientific research: a framework for deliberate design. Ecology and Society 17(2): 29. http://dx.doi.org/10.5751/ES-04705-170229
Simon, H. (1969). The Sciences of the Artificial. MIT Press.
Sintov, N., Kar, D., Nguyen, T., Fang, F., Hoffman, K., Lyet, A., & Tambe, M. (2017). Keeping It Real: Using Real-World Problems to Teach AI to Diverse Audiences. AI Magazine, 38(2).
Such, J. M. (2017). Privacy and autonomous systems. Paper presented at the Proceedings of the 26th International Joint Conference on Artificial Intelligence.
Sweeney, L. (2013). Discrimination in online ad delivery. Queue, 11(3): 10.
Tapus, A., Peca, A., Aly, A., Pop, C., Jisa, L., Pintea, S., Rusu, A. S. and David, D. O. (2012). Children with autism social engagement in interaction with Nao, an imitative robot: A series of single case experiments. Interaction Studies 13(3): 315–347.
Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Knopf.
Tene, O., & Polonetsky, J. (2017). Taming the Golem: Challenges of Ethical Algorithmic Decision-Making. NC Journal of Law and Technology, 19(1): 125.
Thelisson, E. (2017). Towards trust, transparency, and liability in AI/AS systems. Paper presented at the Proceedings of the 26th International Joint Conference on Artificial Intelligence.
Tiberius, V. (2018). Well-Being As Value Fulfillment: How We Can Help Each Other to Live Well. Oxford University Press.
Tolomei, G., Silvestri, F., Haines, A., & Lalmas, M. (2017). Interpretable predictions of tree-based ensembles via actionable feature tweaking. Paper presented at the Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Turkle, S. (2016). Reclaiming conversation: The power of talk in a digital age. Penguin.
Turkle, S. (2017). Alone together: Why we expect more from technology and less from each other. Hachette UK.
Vanderborght, B., Simut, R., Saldien, J., Pop, C., Rusu, A. S., Pintea, S., Lefeber, D. and David, D. O. (2012). Using the social robot Probo as a social story telling agent for children with ASD. Interaction Studies 13(3): 348–372.
Varakin, D. A., Levin, D. T. and Fidler, R. (2004). Unseen and unaware: Implications of recent research on failures of visual awareness for human-computer interface design. Human-Computer Interaction 19(4): 389–422.
Vempati, S. S. (2016). India and the Artificial Intelligence Revolution. Carnegie Endowment for International Peace.
Vold, K. (2015). The Parity Argument for Extended Consciousness. Journal of Consciousness Studies, 22(3–4): 16–33.
Wachter, S. and Mittelstadt, B. D. (2018). A right to reasonable inferences: re-thinking data protection in the age of big data and AI. Columbia Business Law Review.
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2): 76–99.
Wachter-Boettcher, S. (2017). Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech. WW Norton & Company.
Walden, J., Jung, E., Sundar, S., and Johnson, A. (2015). Mental models of robots among senior citizens: An interview study of interaction expectations and design implications. Interaction Studies 16(1): 68–88.
Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford University Press.
Walsh, T. (2016). The singularity may never be near. arXiv preprint arXiv:1602.06462.
Walsh, T. (2017). Android Dreams: The Past, Present and Future of Artificial Intelligence. Oxford University Press.
Weiskopf, D. (2008). Patrolling the mind's boundaries. Erkenntnis, 68(2): 265–76.
Weller, A. (2017). Challenges for transparency. arXiv preprint arXiv:1708.01870.
Wheeler, M. (2010). Minds, Things, and Materiality. In L. Malafouris and C. Renfrew (eds.), The Cognitive Life of Things: Recasting the Boundaries of the Mind. Cambridge: McDonald Institute Monographs. (Reprinted in J. Schulkin (ed.), Action, Perception and the Brain: Adaptation and Cephalic Expression. Basingstoke: Palgrave Macmillan.)
Whitfield, C. (2018). The Ethics of Artificial Intelligence. PwC Australia.
Whittlestone, J., Nyrup, R., Alexandrova, A., and Cave, S. (2019). The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions. Forthcoming in Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society.
Wilson, R. A. (1994). Wide Computationalism. Mind, 103(411): 351–72.
Wilson, R. A. and Clark, A. (2009). How to situate cognition: Letting nature take its course. In Murat Aydede and P. Robbins (eds.), The Cambridge Handbook of Situated Cognition. Cambridge: Cambridge University Press, 55–77.
The Wilson Centre. (2017). Artificial Intelligence: A Policy-Oriented Introduction.
Wood, L., Lehmann, H., Dautenhahn, K., Robins, B., Rainer, A., and Syrdal, D. (2016). Robot-mediated interviews with children. Interaction Studies 17(3): 438–460.
World Wide Web Foundation. (2017). Artificial Intelligence: Starting the Policy Dialogue in Africa.
Yao, S., & Huang, B. (2017). Beyond parity: Fairness objectives for collaborative filtering. Paper presented at the Advances in Neural Information Processing Systems.
Yuste, R. et al. (2017). Four Ethical Priorities for Neurotechnologies and AI. Nature News, Nature Publishing Group. www.nature.com/news/four-ethical-priorities-for-neurotechnologies-and-ai-1.22960
Zafar, M. B., Valera, I., Rodriguez, M., Gummadi, K., & Weller, A. (2017). From parity to preference-based notions of fairness in classification. Paper presented at the Advances in Neural Information Processing Systems.
Zarkadakis, G. (2015). In Our Own Image: Will artificial intelligence save or destroy us? Random House.
Zemel, R., Wu, Y., Swersky, K., Pitassi, T., & Dwork, C. (2013). Learning fair representations. Paper presented at the International Conference on Machine Learning.
Žliobaite, I. (2015). A survey on measuring indirect discrimination in machine learning. arXiv preprint arXiv:1511.00148.
Žliobaite, I., Kamiran, F., & Calders, T. (2011). Handling conditional discrimination. Paper presented at the 2011 IEEE 11th International Conference on Data Mining (ICDM).

Appendices
Appendix 1: Summary of literature reviews

This report draws on a series of literature reviews of how the ethical and societal implications of algorithms, data and AI have been discussed across a range of academic disciplines, in policy and civil society reports, and in popular science and the media. The reviews cover over 100 academic papers from disciplines including (but not limited to) computer science, ethics, human computer interaction, law, and philosophy, 20 policy documents (at least one from each of the seven continents), over 25 of the most commonly cited popular books, news and media articles, and several reports documenting public engagement research.61

Sections 1–3 in this appendix summarise the main observations from each of these literature reviews. Section 4 summarises a number of overlapping themes that emerged.

1. Academic literatures

1a. Computer science and machine learning
We covered papers published in 2017 in the most relevant conferences and journals in the areas of AI, machine learning, data science, and data mining.62 In most of the venues covered, less than 1% of papers were directly related to ethical or societal impacts of the technology.

In general, there seems to be a "culture of disengagement" among technical researchers and engineers, who generally do not see ethical and societal questions raised by technology as their responsibility.63 However, the last 2–3 years have seen growing interest, as illustrated, for example, by relevant workshops and symposia at major conferences, the FAT/ML conference, and the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.

Where technical research directly addresses ethical and societal issues, unsurprisingly it tends to focus on those that can be framed or simplified in technical terms: including how we might explain or interpret the decisions of 'black box' machine learning systems, how we can assess the reliability of different systems, issues of privacy and data protection, and how we can build important values like 'fairness' into the use of algorithms and AI systems.

We also covered several survey papers looking at the way AI and data science professionals are trained in ethics. While ethical codes are being developed for data scientists, most AI professionals haven't had any training at all in ethical issues. Since this space is so fast-moving, and there is no 'standard' route to becoming an AI professional, it is not yet clear exactly what should be covered in ethics training, or who exactly should receive it.

1b. Philosophy and ethics
We surveyed a range of recent papers listed on PhilPapers.org, the main index of English language philosophy publications, under several headings related to ethics in AI.

Most of these papers focus on the moral significance of the kinds of advanced AI technologies that might exist in the future.64 Questions explored include whether and how artificial agents might make moral decisions, at what point (if ever) intelligent machines might have the moral status normally accorded to humans,65 and the implications of 'superintelligent' machines for our concepts of autonomy and what it means to be human.66

Work on current technologies is not widely discussed in the ethics of AI literature. Instead, such issues are usually covered under the headings of 'Information Ethics' or 'Computer Ethics'.67 Early papers in this field

61 All of the sources covered are referenced in the bibliography, and for each area of literature, we highlight the key references used.

62 Including papers from the following conferences: IJCAI, AAAI, NIPS, ICML, KDD, ICDM and the following journals: AIJ, JAIR, AIMag, AIRev, MLJ, JMLR, TKDD, TPAMI,
TIST, IDA.

63 Cech (2014).

64 https://philpapers.org/browse/ethics-of-artificial-intelligence

65 See Boddington et al. (2017); Gunkel et al. (2014); Muller (2014); Wallach and Allen (2009); Allen, Varner and Zinser (2010); Anderson and Anderson (2007).

66 Bostrom (2003).

67 See, for example, Dignum (2018) and Bynum (2015).



have highlighted the issues of accountability, bias and value-ladenness in data and algorithmic decision-making. More recent work has started to analyse how key concepts such as 'transparency', 'bias', 'fairness' or 'responsibility' apply to ADA-based technologies. This literature is usually published in specialist journals (e.g. Ethics and Information Technology, Philosophy & Technology, Science and Engineering Ethics), or often in proceedings of technical ADA fields.

However, there seems to be a relative lack of systematic work analysing the ethical implications of current ADA technologies from a philosophical perspective. What literature exists still seems to be relatively low profile within the discipline of philosophy.

1c. Law
The academic law literature we covered mostly discusses questions of how the law (both existing and potential) can be used to mitigate risks from ADA-based technologies, particularly given rapid advances in AI and machine learning.

Some of the key questions covered include: what existing regulation means in practice for the use of data and algorithms (e.g. to what extent does the GDPR mandate a 'right to explanation', and what type of explanation?);68 whether such regulation can actually solve the problems it aims to (e.g. how much does a 'right to explanation' help with issues of privacy, personal autonomy, and trust?);69 and how existing law can deal with the liability and accountability issues that arise with increasing deployment of AI.70

As well as the question of what law is needed around the use of AI, data, and algorithms, there is also a question of how these technologies might impact the legal process itself – changing the way that testimony and evidence are given, for example.

More than other fields, the legal literature tries to pull apart different interpretations of ambiguous terms – like 'privacy' or 'fairness' – and to discuss the implications of different meanings.

1d. Human-machine interaction
Human-machine interaction (HMI) is an interdisciplinary field combining philosophy, psychology, cognitive science, computer science, engineering and design. We looked at papers from prominent HMI journals including Human-Computer Interaction, Neuroethics, and AI and Society.71

Recurring ethical and social issues discussed in this literature include the psychological and emotional impacts of different human-computer interactions, their impacts on different parts of wider society such as the economy and labour markets, and concerns about human agency, responsibility, autonomy, dignity, and privacy.

One interesting aspect of this literature is its attention to how the consequences of human-computer interaction will differ depending on the types of people affected, the types of technology used, and the contexts of the interaction. However, more attention could be given to the question of how each particular kind of technology might affect different demographics differently, and how the implications of these interactions may differ depending on the nature and context of the interaction.

1e. Political and social sciences
We surveyed the large and growing literature across politics, economics and social science that discusses how algorithms, AI and data will impact society more broadly.

The main issues covered in this literature include: how ADA will impact economic growth and disrupt the economy in general; the impact on jobs and the labour market more specifically,72 and experts' attempts to predict how automation will affect jobs, how quickly, and what policy responses to technological unemployment will be needed (including training, education, and redistribution schemes such as universal basic income). Another key issue is the impact of ADA on global inequality and prosperity, with concerns being raised that technology may widen the gap between developed and developing countries.73

Finally, there is a discussion around how ADA will impact national and international politics: what politics, power, and democracy will look like

68 See, for example, Goodman and Flaxman (2016); Wachter, Mittelstadt, and Floridi (2017); Selbst and Powles (2017).

69 Edwards and Veale (2017).

70 Kuner et al. (2017).

71 Key papers include Becker (2006); Clark (2008); Jotterand and Dubljevic (2016); Menary (2007); Parens (1998); Schermer (2013); Sharkey (2014).

72 Frey (2017); Kaplan (2015); Marcus (2012); Marien (2014).

73 Eubanks (2018).

in an increasingly ADA-controlled society,74 and how the use of autonomous weapons and the risk of an AI 'arms race' might threaten international stability.

1f. Other cross-disciplinary literature
Finally, we looked at how ethical issues around ADA are discussed at a cross-disciplinary level in publications spanning philosophy, law, and machine learning.75

There is a substantial amount of cross-citation across these different fields, and a focus on a shared set of key concepts, particularly those of fairness, accountability, and transparency, but also bias, discrimination, explainability, privacy, and security.

However, these key terms are often used unreflectively, in different and inconsistent ways, and without much further analysis – for example, proposing methods to increase 'transparency' without clarifying what this means or why it is important.

2. Popular science and media

We surveyed how ethical and social issues relating to algorithms, data and AI have been discussed in the media and in popular science literature, as these issues have received increasing public attention.

2a. Popular science books
Looking at a range of popular science books on artificial intelligence, we found that two topics arose particularly prominently: (a) the risks posed by potential 'superintelligence', and particularly the challenge of aligning advanced AI with human values, and (b) the future of work and the potential disruption of automation. Other issues covered less prominently include whether a machine can be held responsible for its actions or given rights, and how to prevent the use of big data from increasing inequality and leading to abuses of power.76

2b. Popular media
Popular media articles have recently begun to focus much more on the risks posed by current uses of algorithms and AI, particularly the potential for bias against underrepresented communities, and threats to personal privacy from the use of large amounts of personal data.77

It is not easy to measure how prominent these concerns are becoming, but some indicators can give a rough idea of their relative growth. For instance, figure 4 shows the percentage of documents from the 'AI topics' database related to the ethics of AI or its impacts.

[Figure 4. Percentage of articles in the 'AI topics' database having at least one keyword related to ethics and impact of AI (green dots), with the general tendency (black line) and standard errors (grey band). The methodology is explained in full in Martínez-Plumed et al., "The Facets of AI", IJCAI 2018.]

A similar perspective can be seen from articles in the New York Times, as analysed by Fast and Horvitz (2017), who find that AI discussion has exploded since 2009, but that levels of pessimism and optimism have remained balanced.

3. Policy reports and wider international landscape

Various organisations and government

74 See for example Monbiot (2017) and Helbing et al. (2017).

75 See for example Lipton (2017), Weller (2017), Binns (2018), Tene and Polonetsky (2017), and Selbst and Barocas (2016).

76 Key books covered include Barrat (2013); Bostrom (2014); Brynjolfsson and McAfee (2014); Chace (2015); Collins (2018); Eubanks (2018); Harari (2015); Kurzweil (2012); McFarland (2008); Moravec (1988); Noble (2018); O'Connell (2017); O'Neil (2016); Shanahan (2015); Tegmark (2017); Wachter-Boettcher (2017); Wallach and Allen (2009); Walsh (2017); Zarkadakis (2015).

77 See for example Crawford (2016); Li (2018); Shariff, Rahwan and Bonnefon (2016); Mukherjee (2017); Kleinberg, Ludwig and Mullainathan (2016).

institutions in the UK, US, and across the world have also begun mapping out the ethical and societal questions posed by the use of algorithms, data and AI, with a focus on identifying their practical implications for policy. We looked at how these policy-focused reports discuss the issues in this space, with a particular focus on any international differences.78

These policy-focused reports naturally look at a very wide range of issues, covering most of those we found in different parts of the academic literature. There was particular focus on the following issues: data management and use, fairness, bias, statistical stereotyping, transparency, interpretability, responsibility, accountability, the future of work, and economic impact.

However, reports from different parts of the world focused their attention on different issues. In developed countries, there is greater focus on the safe deployment and potential risks of technology than in other parts of the world. In developing countries, the conversation focuses more on building capacity and an ecosystem around technology and research.

4. Public engagement research

Public engagement is an attempt to involve members of different publics in agenda setting and decision-making on various matters of policy, science, medicine, and technology. It involves a variety of methods, including polling, surveys, consultation and citizen fora. So far, several survey initiatives have mapped aspects of ADA understanding, acceptance, and concerns surrounding AI in the UK, including:

• Ipsos MORI and the Royal Society (2016/2017) carried out the first public dialogues on machine learning in the UK, looking at public attitudes in health, social care, marketing, transport, finance, policing, crime, education, and art. The initiative made use of dialogue discussions and surveys, and revealed that only 9% of the people surveyed had heard of machine learning before.79

• The RSA (2017) looked at ways to engage citizens in the ethical deployment of AI technologies, and found that very few people know the extent to which automated decision-making influences their lives.80

• The Cabinet Office (2016) investigated how the public weigh up risk around machine learning and AI when applied to administrative decisions, by means of a conjoint survey and a data game. They too found that public awareness of data science is limited.81

• The Wellcome Trust (2016) queried the public acceptability of commercial access to patient data, by means of a large public dialogue, followed up by a quantitative survey. Their work revealed that without a clear public benefit, the public are very concerned about the idea of commercial access to healthcare data.82

• The Health Research Authority and Human Tissue Authority facilitated dialogues in three locations with public and scientist stakeholders about consent to sharing data and tissue in a world of genomics. Participants expressed the worry that, by giving consent now to future uses of their data, they might unwittingly contribute to a two-tier society in which people can be discriminated against based on their genome if legislation changes.83

• The Academy of Medical Sciences (forthcoming 2018) is engaged in public and stakeholder dialogue on the role of beneficial AI in healthcare.

• DeepMind (2018) organised a public and stakeholder engagement initiative to develop
78 Including EU EDPS Advisory Group (2018); Future Advocacy and the Wellcome Trust (2018); Government Office for Science (2016); IEEE (2018); Omidyar
Network and Upturn (2018); National Science and Technology Council (2016); Royal Society (2017); Royal Society and the British Academy (2017); Select
Committee on Artificial Intelligence (2018).

79 Ipsos MORI and the Royal Society, (2016); Ipsos MORI and the Royal Society. (2017).

80 Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA), (2018).

81 Cabinet Office. Public dialogue on the ethics of data science in government. (2016). www.ipsos.com/sites/default/files/2017-05/data-science-ethics-in-government.pdf

82 Wellcome Trust. Public attitudes to commercial access to patient data. (2016). www.ipsos.com/sites/default/files/publication/5200-03/sri-wellcome-trust-
commercial-access-to-health-data.pdf

83 www.hra.nhs.uk/about-us/what-we-do/how-involve-public-our-work/what-patients-and-public-think-about-health-research/

the principles and values which should drive its behaviour globally.84

5. Summary

At a general level, we identified the following commonly discussed issues emerging from these literatures:

• Ensuring that 'black box' algorithms and AI systems are transparent/explainable/interpretable.

• Ensuring that uses of ADA are reliable and robust.

• Maintaining individual privacy and protection of personal data.

• Ensuring algorithms and AI systems are used fairly and do not reflect historical bias, or lead to new forms of bias or discrimination.

• Ensuring algorithms and AI reflect human values more broadly.

• The question of whether AI systems can ever make moral decisions.

• The question of whether AI systems might ever attain moral status.

• Issues of accountability, responsibility and liability around the use of ADA.

• The role of ethics statements or ethical training in ensuring responsible use of ADA.

• The role of law and regulation in mitigating the risks and ensuring the benefits of AI.

• Building appropriate levels of trust between humans and machines/algorithms.

• Implications of ADA for human agency, autonomy, and dignity.

• Impact of ADA on the economy and economic growth.

• Impact of ADA on jobs and labour markets, and developing policies around technological unemployment.

• Impact of ADA on global inequality.

• Impact of ADA on national politics, public opinion, and democracy – including how unemployment might lead to disruptive changes in public opinion.

• How ADA changes power in a society.

• How ADA might be used to direct our attention or manipulate opinions (e.g. for political or commercial purposes).

• Impact of ADA on international relations, conflict, and security – including the impact of autonomous weapons and the risk of a global arms race.

• What new methods of global governance might be needed to deal with the challenges posed by increasingly powerful technologies.

84 https://deepmind.com/applied/deepmind-health/transparency-independent-reviewers/developing-our-values/#image-27248

Appendix 2: Groupings and principles


Below we list some of the common ways that organisations have so far grouped and structured issues relating to ADA, as well as various sets of principles that have been proposed.

One dividing line is between those approaches that give fairly long lists of principles (Asilomar, Partnership on AI), and those that use as few as four categories (e.g. the AI Now 2017 Report). There are advantages and disadvantages to both: the shorter lists can be useful for providing a simple overview of a complex field, but risk conflating importantly different issues or leaving out important themes. They can only aim to be comprehensive by making the categories broad and unspecific. Longer lists, on the other hand, can aim to be more comprehensive and to capture more nuance, but risk losing a clear overview, and may include categories that overlap in non-perspicuous ways.

A strategy for trying to balance these two approaches is to draw on a broader analytical framework. For example, the EDPS Ethics Advisory Group propose to derive the most important issues from eight 'European' values, while Cowls and Floridi (2018) propose that all relevant issues can be captured as resulting from technologies being either overused, misused or underused relative to 'four fundamental points in the understanding of human dignity and flourishing: who we can become (autonomous self-realisation); what we can do (human agency); what we can achieve (societal capabilities); and how we can interact with each other and the world (societal cohesion).' (p.1).

While these frameworks can strike a balance between complexity and systematicity, they still carry the risk of leaving out or downplaying some issues. For instance, it is not immediately clear where issues of bias and discrimination should figure in Cowls and Floridi's list. Furthermore, systematic frameworks of this kind generally presuppose a judgment of what the fundamental values are that should structure the framework. This can be useful in contexts where there is a prior commitment to such a set of values (as may be the case with the 'European values' in the context of the European Union), but agreement on such value judgments cannot be universally presumed.

Different ways of carving up the space of ADA ethics and societal impacts can serve different purposes – for example, providing an overview, capturing all relevant issues, or providing practically relevant advice. The different frameworks surveyed below can all be useful for these purposes. It is doubtful that a single framework could capture the entire field and serve all purposes, and doing so is neither necessary nor sufficient for making constructive progress on these issues (although, of course, it might be useful for any given organisation or community to settle on such principles). Instead, efforts to map and organise the relevant issues should be understood as contextually useful tools for specific purposes.

1. Common ways of organising issues

The AI Now 2017 Report identifies four focus areas:

1. Labor and Automation
2. Bias and Inclusion
3. Rights and Liberties
4. Ethics and Society

DeepMind Ethics and Society split their work into six research themes:

1. Privacy, transparency, and fairness
2. Economic impact, inclusion, and equality
3. Governance and accountability
4. AI morality and values
5. Managing AI risk, misuse, and unintended consequences
6. AI and the world's complex challenges

The Partnership on AI uses a breakdown into six 'thematic pillars':

1. Safety-critical AI
2. Fair, transparent, and accountable AI
3. Collaborations between people and AI systems
4. AI, labor, and the economy
5. Social and societal influences of AI
6. AI and social good

The EDPS Ethics Advisory Group highlights seven key 'socio-cultural shifts of the digital age':

1. From the individual to the digital subject
2. From analogue to digital life
3. From governance by institutions to governmentality through data
4. From a risk society to scored society
5. From human autonomy to the convergence of humans and machines
6. From individual responsibility to distributed responsibility
7. From criminal justice to pre-emptive justice

The Group also considers the impact of digital technologies on the following values:

1. Dignity
2. Freedom
3. Autonomy
4. Solidarity
5. Equality
6. Democracy
7. Justice
8. Trust

The Obama White House report, 'Preparing for the future of artificial intelligence', divides its discussion into the following sections:

1. Applications of AI for public good
2. AI in government
3. AI and regulation
4. Research and workforce
5. AI, automation, and the economy
6. Fairness, safety, and governance
7. Global considerations and security

The Royal Society and British Academy joint report (2017) uses the following categories:

1. Safety, security, prevention of harm
2. Human moral responsibility
3. Governance, regulation, monitoring, testing, certification
4. Democratic decision-making
5. Explainability and transparency

The European Group on Ethics in Science and New Technologies presents the following breakdown of issues:

1. Privacy and consent
2. Fairness and statistical stereotyping
3. Interpretability and transparency
4. Responsibility and accountability
5. Personalisation, bubbles, and manipulation
6. Power asymmetries and inequalities
7. Future of work and the economy
8. Human-machine interaction

2. Principles and codes85

The Asilomar AI Principles include the following 'ethics and values' principles:86

• Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

• Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

• Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

• Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

• Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

• Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

• Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems' power to analyze and utilize that data.

• Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.

• Shared Benefit: AI technologies should benefit and empower as many people as possible.

• Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

• Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

• Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

• AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Partnership on AI members 'believe in and endeavour to uphold the following tenets':

1. We will seek to ensure AI technologies benefit and empower as many people as possible.

2. We will educate and listen to the public and actively engage stakeholders to seek their feedback on our focus, inform them of our work, and address their questions.

3. We are committed to open research and dialogue on the ethical, social, economic and legal implications of AI.

4. We believe that AI research and development efforts need to be actively engaged with and accountable to a broad range of stakeholders.

5. We will engage with and have representation from stakeholders

85 These summarise the key aspects of various principles and codes, but do not necessarily represent the principles in full – sometimes just using the title and not
the full explanation of each principle, for example.

86 The full Asilomar Principles include a further ten principles on ‘research issues’ and ‘longer-term issues’ which we do not include here.

in the business community to help ensure that domain-specific concerns and opportunities are understood and addressed.

6. We will work to maximise the benefits and address the potential challenges of AI technologies, by:

a. Working to protect the privacy and security of individuals.
b. Striving to understand and respect the interests of all parties that may be impacted by AI advances.
c. Working to ensure that AI research and engineering communities remain socially responsible, sensitive, and engaged directly with the potential influences of AI technologies on wider society.
d. Ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints.
e. Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.

7. We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.

8. We strive to create a culture of cooperation, trust, and openness among AI scientists and engineers to help us all better achieve these goals.

The Lords Select Committee on AI report suggests five principles for a cross-sector AI code:

1. AI should be developed for the common good and benefit of humanity.
2. AI should operate on principles of intelligibility and fairness.
3. AI should not be used to diminish the data rights or privacy of individuals, families, or communities.
4. All citizens should have the right to be educated to enable them to flourish mentally and economically alongside artificial intelligence.
5. The autonomous power to hurt, destroy, or deceive human beings should never be vested in AI.

The IEEE Standards Association has also developed a set of general principles to guide ethical governance of 'autonomous and intelligent systems':

1. Human rights
2. Prioritising well-being
3. Accountability
4. Transparency
5. Technology misuse and awareness of it

The Association for Computing Machinery (ACM)'s 'Principles for Algorithmic Transparency and Accountability':87

1. Awareness
2. Access and redress
3. Accountability
4. Explanation
5. Data Provenance
6. Auditability
7. Validation and Testing

The Japanese Society for Artificial Intelligence (JSAI) Ethical Guidelines:88

1. Contribution to humanity
2. Abidance of laws and regulations
3. Respect for the privacy of others
4. Fairness
5. Security
6. Act with integrity
7. Accountability and Social Responsibility
8. Communication with society and self-development
9. Abidance of ethics guidelines by AI

The Future Society's Science, Law and Society Initiative – Principles for the Governance of AI:89

1. AI shall not impair, and, where possible, shall advance the equality in rights, dignity, and freedom to flourish of all humans.
2. AI shall be transparent.
3. Manufacturers and operators of AI shall be accountable.
4. AI's effectiveness shall be measurable in the real-world applications for which it is intended.
5. Operators of AI systems shall have appropriate competencies and expertise.
6. The norms of delegation of decisions to AI systems shall
87 See the ACM US Public Policy Council's 'Statement on Algorithmic Transparency and Accountability' (2017).

88 http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf

89 www.thefuturesociety.org/science-law-society-sls-initiative/#1516790384127-3ea0ef44-2aae

be codified through thoughtful, inclusive dialogue with civil society.

UNI Global Union's 'Top 10 Principles for Ethical Artificial Intelligence':90

1. Demand that AI systems are transparent.
2. Equip AI systems with an 'ethical black box'.
3. Make AI serve people and planet.
4. Adopt a human-in-command approach.
5. Ensure a genderless, unbiased AI.
6. Share the benefits of AI systems.
7. Secure a just transition and ensure support for fundamental freedoms and rights.
8. Establish global governance mechanisms.
9. Ban the attribution of responsibility to robots.
10. Ban AI arms race.

The Montréal Declaration for Responsible AI91 consists of the following principles:

1. Well-being
2. Respect for autonomy
3. Protection of privacy and intimacy
4. Solidarity
5. Democratic participation
6. Equity
7. Diversity inclusion
8. Prudence
9. Responsibility
10. Sustainable development

90 www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf

91 www.montrealdeclaration-responsibleai.com/the-declaration

Appendix 3: Different perspectives


The entire space of ethical and societal impacts of ADA is large, highly complex and unlikely to be captured within a single unified framework (cf. section 2 and appendix 2). This makes it difficult to understand individual issues without first zooming in and filtering out some information. Conversely, it is easy for important aspects or dimensions of a given issue to be overlooked or to get lost in the complexity of the space.

This appendix outlines a number of salient perspectives one can take on the space of ADA ethical and societal impacts, with examples of the subdivisions one might make within each. These can be thought of as axes along which one can zoom in on different parts of the space to filter out information. These perspectives can be used singly or in combination, either to restrict the range of issues considered, or to think through a single issue from several different perspectives to ensure that as many relevant aspects as possible are considered.

1. Which sectors or parts of society?

Societies consist of a number of different sectors or parts, as reflected in the ways governments divide their administrations into different departments. Using the US and UK government departments as a template, one might for example focus on how ADA impacts:

Agriculture
Business
Culture, media, and sports
Energy
Education
Environment
Development
Trade
International and institutional relations
Transport
Work and labour
Health and social care
Finance and the economy
Security and defence
Community and housing
Crime and justice

2. Which level of social organisation?

Human relations are structured into a number of different levels of social organisation, from the local community to global international relations. In addition to looking at issues by governmental sector, one can focus on issues that arise at different levels of social organisation (figure 5).

[Figure 5. Different levels and parts of society which may be impacted by ADA-based technologies: nested levels of social organisation – individual, community, national, international, and global – alongside cross-cutting issues such as trade, conflict, the economy, politics, demographic and geographic factors, media, security and cooperation.]

3. Which time-frame?

Issues relating to ADA may emerge at different time-scales. For instance, we may distinguish:

1. Present challenges: What are the challenges we are already aware of and already facing today?

2. Near-future challenges: What challenges might we face in the near future, assuming current technology?

3. Long-run challenges: What challenges might we face in the longer run, as technology becomes more advanced?

Thinking about challenges in the first category is the easiest, as they are ones that are currently in front of us and being discussed: ensuring the privacy of personal data, for example. Challenges in the second category require more thought, to imagine how current technologies might pose new challenges in future. An example might be how current image synthesis techniques could be put to malicious use. Challenges in the third category are the most difficult to forecast, since they require thinking about the impact of future technological capabilities. Discussion of the potential impacts of superintelligent AI would fall into this category.

4. Which publics?

Different publics concern themselves with different problems and have different perspectives on the same issues. The following distinctions between publics, and their corresponding questions about ADA technologies, can inform how one organises inquiry into the moral relevance of these technologies:

• Designers and engineers: What responsibilities do I have to ensure the technology I am developing is ethical? What ethical standards need to be met? How can demands on technology, like privacy, fairness, and transparency, be made technically precise?

• Users/general public: How does a given technology impact my day-to-day life? What new trade-offs does it introduce for me?

• Marginalised groups: Is this technology a threat or an opportunity given our precarious status? How can we use it to fight prejudice?

• Organisations and corporate bodies: What are the costs and benefits of automating a given service/task? What do we need to think about to ensure our use of technology is ethical/trustworthy?

• Policymakers and regulators: Where is policy or regulation needed? Where is there public pressure to act?

• NGOs and civil society: How can we ensure widespread public engagement on these issues? Whose interests might not be represented?

• Researchers: What intellectually interesting questions does the use of new technologies raise? What issues require deeper intellectual thought?

• Journalists and communicators: What aspects of technology and their impacts on society most need to be communicated to different publics? How can these issues be communicated most effectively?

5. What type of challenge?

When a piece of technology goes wrong, it can do so for different reasons. We might consider different types of challenges that can arise from:

• How technology is designed or developed (e.g. what biases might exist in the data used to train an algorithm, or an inability to examine how exactly an algorithm uses different features to make a decision).

• Externalities or unintended consequences of how technology is applied in society (e.g. the impact of automation on the labour market, or issues of liability as algorithms are used to make decisions in healthcare and other important areas).

• Consequences of failure to perform, or accidents (e.g. self-driving car accidents).

• Malicious use of technology (e.g. for surveillance, manipulation or crime).

6. What type of solution?

We have many different mechanisms and methods at our disposal for addressing the various ethical and social challenges arising from the use of ADA. Different types of solution will be needed for different types of problem, and multiple approaches will be needed to solve most challenges. Thinking more systematically about the different methods available for tackling these problems could help to identify new approaches and angles.

For example, we might break down different types of solution as follows:

• Law and regulation: national and international.
• Government policy.
• Public engagement and education.
• Activism.
• Different types of research:
– Technical research
– Philosophy/ethics/humanities research
– Social science research.

Published by the Nuffield Foundation, 28 Bedford Square, London WC1B 3JS


Copyright © Nuffield Foundation 2019

This report is also available to download at www.nuffieldfoundation.org

Designed by Soapbox
www.soapbox.co.uk
