AI4People's Ethical Framework For A Good AI Society: Opportunities, Risks, Principles, and Recommendations
CONTENTS
1. Introduction
2.1 Who we can become: enabling human self-realisation, without devaluing human abilities
2.2 What we can do: enhancing human agency, without removing human responsibility
2.3 What we can achieve: increasing societal capabilities, without reducing human control
2.4 How we can interact: cultivating societal cohesion, without eroding human self-determination
4.1 Beneficence: promoting well-being, preserving dignity, and sustaining the planet
4.2 Non-maleficence: privacy, security and “capability caution”
4.3 Autonomy: the power to decide (whether to decide)
4.4 Justice: promoting prosperity and preserving solidarity
4.5 Explicability: enabling the other principles through intelligibility and accountability
References
Authors
Luciano Floridi1,2, Josh Cowls1,2, Monica Beltrametti3, Raja Chatila4,5, Patrice Chazerand6, Virginia Dignum7, 8, Christoph Luetge9,
Robert Madelin10, Ugo Pagallo11, Francesca Rossi12,13, Burkhard Schafer14, Peggy Valcke15,16, and Effy Vayena17.
1 Oxford Internet Institute, University of Oxford, Oxford, United Kingdom.
2 The Alan Turing Institute, London, United Kingdom.
3 Naver Corporation, Grenoble, France.
4 French National Center of Scientific Research, France.
5 Institute of Intelligent Systems and Robotics at Pierre and Marie Curie University, Paris, France.
6 Digital Europe, Brussels, Belgium.
7 University of Umeå, Umeå, Sweden.
8 Delft Design for Values Institute, Delft University of Technology, Delft, the Netherlands.
9 TUM School of Governance, Technical University of Munich, Munich, Germany.
10 Centre for Technology and Global Affairs, University of Oxford, Oxford, United Kingdom.
11 Department of Law, University of Turin, Turin, Italy.
12 IBM Research, United States.
13 University of Padova, Padova, Italy.
14 University of Edinburgh Law School, Edinburgh, United Kingdom.
15 Centre for IT & IP Law, Catholic University of Leuven, Flanders, Belgium.
16 Bocconi University, Milan, Italy.
17 Bioethics, Health Ethics and Policy Lab, ETH Zurich, Zurich, Switzerland.
EXECUTIVE SUMMARY
This White Paper reports the findings of AI4People, an Atomium – EISMD initiative
designed to lay the foundations for a “Good AI Society” through the creation of an
ethical framework. This document was produced by the Scientific Committee of
AI4People.
1. INTRODUCTION
AI is not another utility that needs to be regulated once it is mature. It is a powerful
force, a new form of smart agency, which is already reshaping our lives, our interactions,
and our environments.
AI4People was set up to help steer this powerful force towards the good of society,
everyone in it, and the environments we share. This White Paper is the outcome of the
collaborative effort by the AI4People Scientific Committee—comprising 12 experts and
chaired by Luciano Floridi1—to propose a series of recommendations for the development
of a Good AI Society.
The White Paper synthesises three things: the opportunities and associated risks that
AI technologies offer for fostering human dignity and promoting human flourishing; the
principles that should undergird the adoption of AI; and twenty specific recommendations
that, if adopted, will enable all stakeholders to seize the opportunities, to avoid or at least
minimise and counterbalance the risks, to respect the principles, and hence to develop a
Good AI Society.
The White Paper is structured around four more sections after this introduction.
Section 2 states the core opportunities for promoting human dignity and human
flourishing offered by AI, together with their corresponding risks.2 Section 3 offers a
brief, high-level view of the advantages for organisations of taking an ethical approach
to the development and use of AI. Section 4 formulates 5 ethical principles for AI,
building on existing analyses, which should undergird the ethical adoption of AI in
society at large. Finally, Section 5 offers 20 recommendations for the purpose of
developing a Good AI Society in Europe.
Since the launch of AI4People in February 2018, the Scientific Committee has
acted collaboratively to develop the recommendations in the final section of this paper.
Through this work, we hope to have contributed to the foundation of a Good AI Society
we can all share.
1 Besides Luciano Floridi, the members of the Scientific Committee are: Monica Beltrametti, Raja Chatila, Patrice Chazerand,
Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy
Vayena. Josh Cowls is the rapporteur. Thomas Burri contributed to an earlier draft.
2 The analysis in this and the following two sections is also available in Cowls and Floridi (2018). Further analysis and
more information on the methodology employed will be presented in Cowls and Floridi (Forthcoming).
2. THE OPPORTUNITIES AND RISKS OF AI FOR SOCIETY
That AI will have a major impact on society is no longer in question. Current debate
turns instead on how far this impact will be positive or negative, for whom, in which
ways, in which places, and on what timescale. Put another way, we can safely dispense
with the question of whether AI will have an impact; the pertinent questions now are by
whom, how, where, and when this positive or negative impact will be felt.
These dangers arise largely from unintended consequences and relate typically to
good intentions gone awry. However, we must also consider the risks associated with
inadvertent overuse or wilful misuse of AI technologies, grounded, for example, in
misaligned incentives, greed, adversarial geopolitics, or malicious intent. Everything
from email scams to full-scale cyber-warfare may be accelerated or intensified by the
malicious use of AI technologies (Taddeo, 2017). And new evils may be made possible
(King et al., 2018). The possibility of social progress represented by the opportunities outlined above must be weighed against the risk that malicious manipulation will
be enabled or enhanced by AI. Yet a broad risk is that AI may be underused out of fear
of overuse or misuse. We summarise these risks in Figure A below, and offer a more
detailed explanation in the text that follows.
Figure A: Overview of the four core opportunities offered by AI, four corresponding risks, and the
opportunity cost of underusing AI.
Yet the relationship between the degree and quality of agency that people enjoy and
how much agency we delegate to autonomous systems is not zero-sum, either pragmatically
or ethically.
Increasingly, we may not need to be either ‘in or on the loop’ (that is, as part of
the process or at least in control of it), if we can delegate our tasks to AI. However, if
we rely on the use of AI technologies to augment our own abilities in the wrong way,
we may delegate important tasks and above all decisions to autonomous systems that
should remain at least partly subject to human supervision and choice. This in turn may
reduce our ability to monitor the performance of these systems (by no longer being ‘on the loop’ either) or to prevent or redress errors or harms that arise (‘post loop’). It
is also possible that these potential harms may accumulate and become entrenched, as
more and more functions are delegated to artificial systems. It is therefore imperative to
strike a balance between pursuing the ambitious opportunities offered by AI to improve
human life and what we can achieve, on the one hand, and, on the other hand, ensuring
that we remain in control of these major developments and their effects.
Consider, for example, the need to decide between engineering the climate directly and designing societal frameworks to encourage a drastic cut in harmful emissions. This latter option might be undergirded
by an algorithmic system to cultivate societal cohesion. Such a system would not be
imposed from the outside; it would be the result of a self-imposed choice, not unlike our
choice of not buying chocolate if we had earlier chosen to be on a diet, or setting up an
alarm clock to wake up. “Self-nudging” to behave in socially preferable ways is the best form of nudging, and the only one that preserves autonomy. It is the outcome of human decisions and choices, but it can rely on AI solutions for its implementation and facilitation.
Yet the risk is that AI systems may erode human self-determination, as they may lead to unplanned and unwelcome changes in human behaviours to accommodate the routines that make automation work and people’s lives easier. AI’s predictive power and relentless nudging, even if unintentional, should be at the service of human self-determination and foster societal cohesion, not the undermining of human dignity or human flourishing.
Taken together, these four opportunities, and their corresponding challenges, paint
a mixed picture of the impact of AI on society and the people in it. Accepting the presence of trade-offs, and seizing the opportunities while working to anticipate, avoid, or minimise the risks head-on, will improve the prospects for AI technologies to promote human dignity and flourishing. Having outlined the potential benefits to individuals and
society at large of an ethically engaged approach to AI, in the next section we highlight
the “dual advantage” to organisations of taking such an approach.
3. THE DUAL ADVANTAGE OF AN ETHICAL APPROACH TO AI
Ensuring socially preferable outcomes of AI relies on resolving the tension between
incorporating the benefits and mitigating the potential harms of AI, in short, simultaneously
avoiding the misuse and underuse of these technologies. In this context, the value of an
ethical approach to AI technologies comes into starker relief. Compliance with the law
is merely necessary (the least that is required), but significantly insufficient (not the most that can be done) (Floridi, 2018). By analogy, it is the difference between playing according to the rules and playing well, so that one may win the game. Adopting an
ethical approach to AI confers what we define here as a “dual advantage”. On one side,
ethics enables organisations to take advantage of the social value that AI makes possible. This is
the advantage of being able to identify and leverage new opportunities that are socially
acceptable or preferable. On the other side, ethics enables organisations to anticipate and
avoid or at least minimise costly mistakes. This is the advantage of prevention and
mitigation of courses of action that turn out to be socially unacceptable and hence
rejected, even when legally unquestionable. This also lowers the opportunity costs of
choices not made or options not grabbed for fear of mistakes.
Ethics’ dual advantage can only function in an environment of public trust and
clear responsibilities more broadly. Public acceptance and adoption of AI technologies
will occur only if the benefits are seen as meaningful, and the risks as potential yet preventable, minimisable, or at least something against which one can be protected, through risk management (e.g. insurance) or redress. These attitudes will depend in turn on
public engagement with the development of AI technologies, openness about how they
operate, and understandable, widely accessible mechanisms of regulation and redress. In
this way, an ethical approach to AI can also be seen as an early warning system against
risks which might endanger entire organisations. The clear value to any organisation of
the dual advantage of an ethical approach to AI amply justifies the expense of engagement,
openness, and contestability that such an approach requires.
4. A UNIFIED FRAMEWORK OF PRINCIPLES FOR AI IN SOCIETY
AI4People is not the first initiative to consider the ethical implications of AI. Many
organisations have already produced statements of the values or principles that should
guide the development and deployment of AI in society. Rather than conduct a similar,
potentially redundant exercise here, we strive to move the dialogue forward, constructively,
from principles to proposed policies, best practices, and concrete recommendations for
new strategies. Such recommendations are not offered in a vacuum. But rather than
generating yet another series of principles to serve as an ethical foundation for our
recommendations, we offer a synthesis of existing sets of principles produced by various
reputable, multi-stakeholder organisations and initiatives. A fuller explanation of the
scope, selection and method of assessing these sets of principles is available in Cowls and
Floridi (Forthcoming). Here, we focus on the commonalities and noteworthy differences
observable across these sets of principles, in view of the 20 recommendations offered in
the rest of the paper. The documents we assessed are:
1. the Asilomar AI Principles, developed under the auspices of the Future of Life
Institute, in collaboration with attendees of the high-level Asilomar conference
of January 2017 (hereafter “Asilomar”; Asilomar AI Principles, 2017);
2. the Montreal Declaration for Responsible AI, developed under the auspices of
the University of Montreal, following the Forum on the Socially Responsible
Development of AI of November 2017 (hereafter “Montreal”; Montreal
Declaration, 2017);3
3. the General Principles offered in the second version of Ethically Aligned Design:
A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems.
This crowd-sourced global treatise received contributions from 250 global
thought leaders to develop principles and recommendations for the ethical
development and design of autonomous and intelligent systems, and was
published in December 2017 (hereafter “IEEE”; IEEE, 2017);4
4. the Ethical Principles offered in the Statement on Artificial Intelligence, Robotics
and ‘Autonomous’ Systems, published by the European Commission’s European
Group on Ethics in Science and New Technologies, in March 2018 (hereafter
“EGE”; EGE, 2018);
5. the “five overarching principles for an AI code” offered in paragraph 417 of the UK House of Lords Artificial Intelligence Committee’s report, AI in the UK: ready, willing and able?, published in April 2018 (hereafter “AIUK”; House of Lords, 2018); and
6. the Tenets of the Partnership on AI, a multistakeholder organisation consisting of academics, researchers, civil society organisations, companies building and utilising AI technology, and other groups (hereafter “the Partnership”; Partnership on AI, 2018).

3 The Montreal Declaration is currently open for comments as part of a redrafting exercise. The principles we refer to here are those which were publicly announced as of 1st May, 2018.
4 The third version of Ethically Aligned Design will be released in 2019 following wider public consultation.
Of all areas of applied ethics, bioethics is the one that most closely resembles
digital ethics in dealing ecologically with new forms of agents, patients, and environments
(Floridi, 2013). The four bioethical principles adapt surprisingly well to the fresh ethical
challenges posed by artificial intelligence. But they are not exhaustive. On the basis of
the following comparative analysis, we argue that one more, new principle is needed in
addition: explicability, understood as incorporating both intelligibility and accountability.
The Montreal and IEEE principles both use the term “well-being”: for Montreal, “the
development of AI should ultimately promote the well-being of all sentient creatures”;
while IEEE states the need to “prioritize human well-being as an outcome in all system
designs”. AIUK and Asilomar both characterise this principle as the “common good”: AI should “be developed for the common good and the benefit of humanity”, according to AIUK. The Partnership describes the intention to “ensure that AI technologies benefit and empower as many people as possible”; while the EGE emphasises the principle of both “human dignity” and “sustainability”. Its principle of “sustainability” represents perhaps the widest of all interpretations of beneficence, arguing that “AI technology must be in line with … ensur[ing] the basic preconditions for life on our planet, continued prospering for mankind and the preservation of a good environment for future generations”. Taken together, the prominence of these principles of beneficence firmly underlines the central importance of promoting the well-being of people and the planet.

5 Of the six documents, the Asilomar Principles offer the largest number of principles with arguably the broadest scope. The 23 principles are organised under three headings, “research issues”, “ethics and values”, and “longer-term issues”. We have omitted consideration of the five “research issues” here as they are related specifically to the practicalities of AI development, particularly in the narrower context of academia and industry. Similarly, the Partnership’s eight Tenets consist of both intra-organisational objectives and wider principles for the development and use of AI. We include only the wider principles (the first, sixth, and seventh tenets).
Yet the infringement of privacy is not the only danger to be avoided in the adoption
of AI. Several of the documents also emphasise the importance of avoiding the misuse
of AI technologies in other ways. The Asilomar Principles are quite specific on this point,
citing the threats of an AI arms race and of the recursive self-improvement of AI, as well
as the need for “caution” around “upper limits on future AI capabilities”. The Partnership
similarly asserts the importance of AI operating “within secure constraints”. The IEEE
document meanwhile cites the need to “avoid misuse”, while the Montreal Declaration
argues that those developing AI “should assume their responsibility by working against
the risks arising from their technological innovations”, echoed by the EGE’s similar call for responsibility.
From these various warnings, it is not entirely clear whether it is the people
developing AI, or the technology itself, which should be encouraged not to do harm – in
other words, whether it is Frankenstein or his monster against whose maleficence we
should be guarding. The question of intent is also unclear: promoting non-maleficence can be seen to incorporate the prevention of both accidental harm (what we call “overuse” above) and deliberate harm (what we call “misuse”). In terms of the principle
of non-maleficence, this need not be an either/or question: the point is simply to prevent
harms arising, whether from the intent of humans or the unpredicted behaviour of
machines (including the unintentional nudging of human behaviour in undesirable
ways). Yet these underlying questions of agency, intent and control become knottier
when we consider the next principle.
The principle of autonomy is explicitly stated in four of the six documents. The
Montreal Declaration articulates the need for a balance between human- and machine-
led decision making, stating that “the development of AI should promote the autonomy
of all human beings and control… the autonomy of computer systems” (italics added).
The EGE argues that autonomous systems “must not impair [the] freedom of human
beings to set their own standards and norms and be able to live according to them”,
while AIUK adopts the narrower stance that “the autonomous power to hurt, destroy or
deceive human beings should never be vested in AI”. The Asilomar document similarly
supports the principle of autonomy, insofar as “humans should choose how and whether
to delegate decisions to AI systems, to accomplish human-chosen objectives”.
These documents express a similar sentiment in slightly different ways, echoing the
distinction drawn above between beneficence and non-maleficence: not only should the
autonomy of humans be promoted, but also the autonomy of machines should be
restricted and made intrinsically reversible, should human autonomy need to be re-
established (consider the case of a pilot able to turn off the automatic pilot and regain
full control of the airplane). Taken together, the central point is to protect the intrinsic
value of human choice – at least for significant decisions – and, as a corollary, to contain
the risk of delegating too much to machines. Therefore, what seems most important here
is what we might call “meta-autonomy”, or a “decide-to-delegate” model: humans should
always retain the power to decide which decisions to take, exercising the freedom to
choose where necessary, and ceding it in cases where overriding reasons, such as efficacy,
may outweigh the loss of control over decision-making. As anticipated, any delegation
should remain overridable in principle (deciding to decide again).
The decision to make or delegate decisions does not take place in a vacuum. Nor
is this capacity to decide (to decide, and to decide again) distributed equally across
society. The consequences of this potential disparity in autonomy are addressed in the
last of the four principles inspired by bioethics.
The emphasis on the protection of social support systems may reflect geopolitics,
insofar as the EGE is a European body. The AIUK report argues that citizens should be
able to “flourish mentally, emotionally and economically alongside artificial intelligence”.
The Partnership, meanwhile, adopts a more cautious framing, pledging to “respect the
interests of all parties that may be impacted by AI advances”.
As with the other principles already discussed, these interpretations of what justice
means as an ethical principle in the context of AI are broadly similar, yet contain subtle
distinctions.
Across the documents, justice variously relates to
a) using AI to correct past wrongs such as eliminating unfair discrimination;
b) ensuring that the use of AI creates benefits that are shared (or at least shareable);
and
c) preventing the creation of new harms, such as the undermining of existing social
structures.
Notable also are the different ways in which the position of AI, vis-à-vis people, is
characterised in relation to justice. In Asilomar and EGE respectively, it is AI technologies
themselves that “should benefit and empower as many people as possible” and “contribute
to global justice”, whereas in Montreal, it is “the development of AI” that “should promote
justice” (italics added). In AIUK, meanwhile, people should flourish merely “alongside”
AI. Our purpose here is not to split semantic hairs. The diverse ways in which the
relationship between people and AI is described in these documents hints at broader
confusion over AI as a man-made reservoir of “smart agency”.
Put simply, and to resume our bioethics analogy, are we (humans) the patient,
receiving the “treatment” of AI, or the doctor prescribing it? Or both? It seems that we
must resolve this question before seeking to answer the next question of whether the
treatment will even work. This is the core justification for our identification within these
documents of a new principle, one that is not drawn from bioethics.
Taken together, we argue that these five principles capture the meaning of each of
the 47 principles contained in the six high-profile, expert-driven documents, forming an
ethical framework within which we offer our recommendations below. This framework
of principles is shown in Figure B.
Figure B: An ethical framework for AI, formed of four traditional principles and a new one.
5. RECOMMENDATIONS FOR A GOOD AI SOCIETY
This section introduces the Recommendations for a Good AI Society. It consists of two
parts: a Preamble, and 20 Action Points. There are four kinds of Action Points: to assess,
to develop, to incentivise and to support. Some recommendations may be undertaken
directly, by national or European policy makers, in collaboration with stakeholders
where appropriate. For others, policy makers may play an enabling role for efforts
undertaken or led by third parties.
5.1 Preamble
We believe that, in order to create a Good AI Society, the ethical principles identified in
the previous section should be embedded in the default practices of AI. In particular, AI
should be designed and developed in ways that decrease inequality and further social
empowerment, with respect for human autonomy, and increase benefits that are shared
by all, equitably. It is especially important that AI be explicable, as explicability is a
critical tool to build public trust in, and understanding of, the technology.
5.2 Action Points
5.2.1 Assessment
1. Assess the capacity of existing institutions, such as national civil courts,
to redress the mistakes made or harms inflicted by AI systems. This
assessment should evaluate the presence of sustainable, majority-agreed
foundations for liability from the design stage onwards in order to reduce
negligence and conflicts (see also Recommendation 5).6
5.2.2 Development
4. Develop a framework to enhance the explicability of AI systems which
make socially significant decisions. Central to this framework is the ability
for individuals to obtain a factual, direct, and clear explanation of the decision-
making process, especially in the event of unwanted consequences. This is
likely to require the development of frameworks specific to different industries,
and professional associations should be involved in this process, alongside
experts in science, business, law, and ethics.
6 Determining accountability and responsibility may usefully borrow from the lawyers of Ancient Rome, who followed the formula ‘cuius commoda eius et incommoda’ (‘the person who derives an advantage from a situation must also bear the inconvenience’). A principle some 2,200 years old, with a well-established tradition and elaboration, could properly set the starting level of abstraction in this field.
7 Of course, to the extent that AI systems are ‘products’, general tort law still applies in the same way to AI as it applies in any
instance involving defective products or services that injure users or do not perform as claimed or expected.
10. Develop a European observatory for AI. The mission of the observatory
would be to watch developments, provide a forum to nurture debate and
consensus, provide a repository for AI literature and software (including
concepts and links to available literature), and issue step-by-step recommendations
and guidelines for action.
11. Develop legal instruments and contractual templates to lay the foundation
for a smooth and rewarding human-machine collaboration in the work
environment. Shaping the narrative on the ‘Future of Work’ is instrumental
to winning “hearts and minds”. In keeping with ‘A Europe that protects’ and the idea of “inclusive innovation”, and to smooth the transition to new kinds of jobs, a European AI Adjustment Fund could be set up along the lines of the
European Globalisation Adjustment Fund.
5.2.3 Incentivisation
12. Incentivise financially, at the EU level, the development and use of AI
technologies within the EU that are socially preferable (not merely
acceptable) and environmentally friendly (not merely sustainable but
favourable to the environment). This will include the elaboration of
methodologies that can help assess whether AI projects are socially preferable
and environmentally friendly. In this vein, adopting a ‘challenge approach’ (see
DARPA challenges) may encourage creativity and promote competition in the
development of specific AI solutions that are ethically sound and in the interest
of the common good.
5.2.4 Support
18. Support the development of self-regulatory codes of conduct for data and
AI-related professions, with specific ethical duties. This would follow the model of other socially sensitive professions, such as medicine or law, with the attendant certification of ‘ethical AI’ through trust labels, so that people understand the merits of ethical AI and therefore demand it from providers. Current attention-manipulation techniques may be constrained through these self-regulatory instruments.
CONCLUSION
Europe, and the world at large, face the emergence of a technology that holds much
exciting promise for many aspects of human life, and yet seems to pose major threats as
well. This White Paper – and especially the Recommendations in the previous section
– seeks to nudge the tiller in the direction of ethically and socially preferable outcomes
from the development, design and deployment of AI technologies. Building on our
identification of both the core opportunities and the risks of AI for society as well as the
set of five ethical principles we synthesised to guide its adoption, we formulated 20
Action Points in the spirit of collaboration and in the interest of creating concrete and
constructive responses to the most pressing social challenges posed by AI.
With the rapid pace of technological change, it can be tempting to view the political
process in the liberal democracies of today as old-fashioned, out-of-step, and no longer
up to the task of preserving the values and promoting the interests of society and
everyone in it. We disagree. With the Recommendations we offer here, including the
creation of centres, agencies, curricula, and other infrastructure, we have made the case
for an ambitious, inclusive, equitable programme of policy making and technological
innovation, which we believe will contribute to securing the benefits and mitigating the
risks of AI, for all people, and for the world we share.
Acknowledgements
This White Paper would not have been possible without the generous support of Atomium
– European Institute for Science, Media and Democracy. We are particularly grateful to
Michelangelo Baracchi Bonvicini, Atomium’s President, to Guido Romeo, its Editor in
Chief, and the staff of Atomium for their help, and to all the partners of the AI4People
project and members of its Forum (http://www.eismd.eu/ai4people) for their feedback.
The authors of this White Paper are the only persons responsible for its contents and
any remaining mistakes.
References
Asilomar AI Principles (2017). Principles developed in conjunction with the 2017 Asilomar conference
[Benevolent AI 2017]. Retrieved September 18, 2018, from https://futureoflife.org/ai-principles
Cowls, J. and Floridi, L. (2018). Prolegomena to a White Paper on Recommendations for the Ethics of AI
(June 19, 2018). Available at SSRN: https://ssrn.com/abstract=3198732.
Cowls, J. and Floridi, L. (Forthcoming). The Utility of a Principled Approach to AI Ethics.
European Group on Ethics in Science and New Technologies (2018, March). Statement on
Artificial Intelligence, Robotics and ‘Autonomous’ Systems. Retrieved September 18, 2018, from
https://ec.europa.eu/info/news/ethics-artificial-intelligence-statement-ege-released-2018-apr-24_en.
Imperial College London (2017, Oct, 11). Written Submission to House of Lords Select Committee on
Artificial Intelligence [AIC0214]. Retrieved September 18, 2018, from http://bit.ly/2yleuET
The IEEE Initiative on Ethics of Autonomous and Intelligent Systems (2017). Ethically Aligned Design,
v2. Retrieved September 18, 2018, from https://ethicsinaction.ieee.org
Floridi, L. (2018). Soft Ethics and the Governance of the Digital. Philos. Technol. 2018, 1-8.
Floridi, L. (2013). The Ethics of Information. Oxford, Oxford University Press.
House of Lords Artificial Intelligence Committee (2018, April, 16). AI in the UK: ready, willing and able?
Retrieved September 18, 2018, from
https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10002.htm
King, T., Aggarwal, N., Taddeo, M., and Floridi, L. (2018, May 22). Artificial Intelligence Crime: An
Interdisciplinary Analysis of Foreseeable Threats and Solutions. Available at SSRN:
https://ssrn.com/abstract=3183238
Montreal Declaration for a Responsible Development of Artificial Intelligence (2017, November, 3).
Announced at the conclusion of the Forum on the Socially Responsible Development of AI. Retrieved
September 18, 2018, from https://www.montrealdeclaration-responsibleai.com/the-declaration.
Partnership on AI (2018). Tenets. Retrieved September 18, 2018, from
https://www.partnershiponai.org/tenets/
Taddeo, M. (2017). The limits of deterrence theory in cyberspace. Philos. Technol. 2017, 1–17.
SCIENTIFIC COMMITTEE
Luciano Floridi
Chairman, AI4People Scientific Committee; Professor of Philosophy and Ethics of Information
and Director of the Digital Ethics Lab at Oxford University.
Monica Beltrametti
Director, Naver Labs Europe.
Thomas Burri
Assistant Professor of International and European Law at University of St. Gallen; Academic
Director Master in International Law at University of St. Gallen; Privatdozent, Dr. iur.
(Zurich), LLM (College of Europe, Bruges), Lic. iur. (Basel), admitted to the Zurich Bar.
Raja Chatila
Director of Research at the French National Center of Scientific Research, Director of the
Institute of Intelligent Systems and Robotics at Pierre and Marie Curie University in Paris,
Director of the Laboratory of Excellence “SMART” on human-machine interaction.
Patrice Chazerand
Director in charge of Digital Economy and Trade Groups at Digital Europe.
Virginia Dignum
Associate Professor of Social Artificial Intelligence; Executive Director, Delft Design for Values Institute, Faculty of Technology, Policy and Management, Delft University of Technology.
Robert Madelin
Visiting Research Fellow at the Centre for Technology and Global Affairs,
University of Oxford, and Fipra International.
Christoph Luetge
Professor of Business Ethics at Technische Universität München.
Ugo Pagallo
Professor of Jurisprudence at the Department of Law, University of Turin, faculty at the
Center for Transnational Legal Studies (CTLS) London, faculty fellow at the Nexa Center for
Internet and Society at the Politecnico of Turin.
Francesca Rossi
Professor at the University of Padova, president of the International Joint Conference on
Artificial Intelligence, Associate Editor in Chief of the Journal of Artificial Intelligence Research.
Burkhard Schafer
Professor of Computational Legal Theory, University of Edinburgh Law School.
Peggy Valcke
Research Professor, Centre for IT & IP Law – IMEC, KU Leuven; Visiting Professor Tilburg
University & Bocconi University Milan; Member Scientific Committee CMPF and FSR (EUI
Florence).
Effy Vayena
Chair of Bioethics, Health Ethics and Policy Lab, ETH Zurich.
With the contribution of: