(Ebook) Reflections on Artificial Intelligence for Humanity (Lecture Notes in Computer Science), Bertrand Braunschweig (editor), Malik Ghallab (editor). ISBN 9783030691271, 3030691276.

The document promotes the ebook 'Reflections on Artificial Intelligence for Humanity', edited by Bertrand Braunschweig and Malik Ghallab, which discusses the transformative impact of AI on society and the responsibilities of various stakeholders. It highlights the need for ethical considerations, governance mechanisms, and interdisciplinary research to address the challenges and opportunities presented by AI. The book aims to support initiatives for responsible AI development and to foster global collaboration on related issues.



State-of-the-Art Survey

Bertrand Braunschweig
Malik Ghallab (Eds.)

LNAI 12600

Reflections on Artificial Intelligence for Humanity

Lecture Notes in Artificial Intelligence 12600

Subseries of Lecture Notes in Computer Science

Series Editors
Randy Goebel
University of Alberta, Edmonton, Canada
Yuzuru Tanaka
Hokkaido University, Sapporo, Japan
Wolfgang Wahlster
DFKI and Saarland University, Saarbrücken, Germany

Founding Editor
Jörg Siekmann
DFKI and Saarland University, Saarbrücken, Germany
More information about this subseries at http://www.springer.com/series/1244
Bertrand Braunschweig • Malik Ghallab (Eds.)

Reflections on Artificial Intelligence for Humanity
Editors

Bertrand Braunschweig
Inria
Le Chesnay, France

Malik Ghallab
LAAS-CNRS
Toulouse, France

ISSN 0302-9743 ISSN 1611-3349 (electronic)
Lecture Notes in Artificial Intelligence
ISBN 978-3-030-69127-1 ISBN 978-3-030-69128-8 (eBook)
https://doi.org/10.1007/978-3-030-69128-8
LNCS Sublibrary: SL7 – Artificial Intelligence

© Springer Nature Switzerland AG 2021


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now
known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are
believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors
give a warranty, expressed or implied, with respect to the material contained herein or for any errors or
omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

Artificial Intelligence is significantly affecting humanity. According to several thinkers and philosophers, this “soft” revolution is comparable to and as disruptive as the
deployment of writing, some five thousand years ago, and printing, a few centuries ago.
As media for human interaction and cognition, writing and printing have deeply
changed social organizations, laws, cities, economy, and science; they have affected
human values, beliefs, and religions. We are possibly witnessing a commensurately
profound but much faster revolution. However, we are not just passive observers.
Every person today is an actor in these dynamics, with different levels of responsibility.
We all need to be well-informed, responsible actors.
We already observe the positive effects of AI in almost every field, from agriculture,
industry, and services, to social interaction, knowledge dissemination, sciences, and
health, including in response to pandemics. We foresee its potential to help address our
sustainable development goals and the urgent challenges for the preservation of the
environment.
We certainly know that there can be no human action, enterprise, or technology
without risks. Those risks related to the safety, security, confidentiality, and fairness of
AI systems are frequently discussed. The threats to free will of possibly manipulative
systems are raising legitimate concerns. The impacts of AI on the economy, employment, human rights, equality, diversity, inclusion, and social cohesion need to be better
assessed.
The ethical values to guide our choices and appraise our progress in the development and use of AI have been discussed through many initiatives, such as the principles of the Montreal declaration, the OECD principles on AI, or the EU guidelines for trustworthy AI.
The opportunities and risks are still not sufficiently well assessed. The criteria to appraise societal desirability may not be universal. Different stakeholders favor different concerns, ranging from human rights and environmental preservation to economic growth, profit, or social control. However, despite differences in deployment and views across different regions, the effects of AI will be increasingly worldwide.
The social acceptability of AI technology is not equivalent to its market acceptance. More than ensuring consumer engagement by the dissemination of convenient services at largely hidden global costs, the focus must be on social acceptability, taking into account long-term effects and possible impacts on future generations. The development and use of AI must be guided by principles of social cohesion, environmental sustainability, meaningful human activity, resource sharing, inclusion, and recognition of social and cultural differences. It has to integrate the imperatives of human rights as well as the historical, social, cultural, and ethical values of democratic societies. It needs to consider global constraints affecting the environment and international relations. It requires continued education and training as well as continual assessment of effects through social deliberation.

Research and innovation in AI are creating an avalanche of changes. These strongly depend on and are propelled by two main forces: economic competition and political initiatives. The former provides a powerful and reactive drive; however, it is mostly
governed by short-term, narrow objectives. The latter rely on the former as well as on
slow feedback from social awareness, education, and understanding, which strive to
keep up with the pace of AI technology.
Scientists from AI and the social sciences who are involved in the progress and
comprehension of the field do not have full control over its evolution, but they are not
powerless; nor are they without responsibilities. They understand and guide the state
of the art and what may need to be done to mitigate the negative impacts of AI. They
are accountable for and capable of raising social awareness about the current limitations
and risks. They can choose or at least adapt their research agenda. They can engage
with integrative research and work toward socially beneficial developments. They can
promote research organizations and assessment mechanisms to favor long-term,
cross-disciplinary objectives addressing the social and human challenges of AI.
There is a need for a clear commitment to act in accordance with these responsibilities. Coordinated actions of all stakeholders need to be guided by the principles and
values that allow us to fully assume these responsibilities, including alignment with the
universal declaration of human rights, respect for and solidarity with all societies and
future generations, and recognition of our interdependence with other living beings and
the environment.
This book calls for all interested scientists, technologists, humanists, and concerned
individuals to be involved with and to support initiatives aimed in particular at
addressing the following questions1:
– How can we ensure the security requirements of critical applications and the safety
and confidentiality of data communication and processing? What techniques and
regulations for the validation, certification, and audit of AI tools are needed to
develop confidence in AI? How can we identify and overcome biases in algorithms?
How do we design systems that respect essential human values, ensuring moral
equality and inclusion?
– What kinds of governance mechanisms are needed for personal data, metadata, and
aggregated data at various levels?
– What are the effects of AI and automation on the transformation and social division
of labor? What are the impacts on economic structures? What proactive and
accommodation measures will be required?
– How will people benefit from decision support systems and personal digital
assistants without the risk of manipulation? How do we design transparent and
intelligible procedures and ensure that their functions reflect our values and criteria?
How can we anticipate failure and restore human control over an AI system when it
operates outside its intended scope?
– How can we devote a substantial part of our research and development resources to
the major challenges of our time such as climate, environment, health, and
education?

1. Issues addressed by the Global Forum on AI for Humanity, Paris, Oct. 28–30, 2019.

The above issues raise many scientific challenges specific to AI, as well as interdisciplinary challenges for the sciences and humanities. They must be the topic of
interdisciplinary research, social observatories and experiments, citizen deliberations,
and political choices. They must be the focus of international collaborations and
coordinated global actions.
The “Reflections on AI for Humanity” proposed in this book develop the above
problems and sketch approaches for solving them. They aim at supporting the work of
forthcoming initiatives in the field, in particular of the Global Partnership on Artificial
Intelligence, a multilateral initiative launched in June 2020 by fourteen countries and
the European Union. We hope that they will contribute to building a better and more
responsible AI.

December 2020

Bertrand Braunschweig
Malik Ghallab
Organization

Programme Committee of the Global Forum for Artificial Intelligence for Humanity, October 28–30, 2019, Paris
Pekka Ala-Pietilä Huhtamaki, Finland
Elisabeth André University of Augsburg, Germany
Noriko Arai National Institute of Informatics, Japan
Genevieve Bell Australian National University, Australia
Bertrand Braunschweig (Co-chair) Inria, France
Natalie Cartwright Finn AI, Canada
Carlo Casonato University of Trento, Italy
Claude Castelluccia Inria, France
Raja Chatila Sorbonne University, France
Kate Crawford AI Now Institute and Microsoft, USA
Sylvie Delacroix University of Birmingham and Alan Turing Institute,
UK
Andreas Dengel DFKI, Germany
Laurence Devillers Sorbonne University, France
Virginia Dignum Umeå University, Sweden
Rebecca Finlay CIFAR, Canada
Françoise Fogelman-Soulié Hub France IA, France
Malik Ghallab (Co-chair) CNRS, France
Alexandre Gefen CNRS, France
Yuko Harayama RIKEN, Japan
Martial Hebert Carnegie Mellon University, USA
Holger Hoos Universiteit Leiden, Netherlands
Lyse Langlois Observatoire international sur les impacts sociétaux de
l’intelligence artificielle et du numérique (OBVIA),
Canada
Fei-Fei Li Stanford University, USA
Jocelyn Maclure Laval University, Canada
Ioana Manolescu Inria and École polytechnique, France
Joel Martin National Research Council, Canada
Michela Milano University of Bologna, Italy
Katharina Morik Technical University of Dortmund, Germany
Joëlle Pineau McGill University and Facebook, Canada
Stuart Russell University of California, Berkeley, USA
Bernhard Schölkopf Max Planck Institute for Intelligent Systems, Germany
and ETH Zurich, Switzerland
Hideaki Takeda National Institute of Informatics, Japan

Paolo Traverso Fondazione Bruno Kessler, Italy


Junichi Tsujii National Institute of Advanced Industrial Science
and Technology, Japan
Hyun Seung Yang Korea Advanced Institute of Science and Technology,
Korea
Contents

Reflections on AI for Humanity: Introduction . . . . . . . . . . . . . . . . . . . . . . . 1


Bertrand Braunschweig and Malik Ghallab

Trustworthy AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Raja Chatila, Virginia Dignum, Michael Fisher, Fosca Giannotti,
Katharina Morik, Stuart Russell, and Karen Yeung

Democratising the Digital Revolution: The Role of Data Governance . . . . . . 40


Sylvie Delacroix, Joelle Pineau, and Jessica Montgomery

Artificial Intelligence and the Future of Work. . . . . . . . . . . . . . . . . . . . . . . 53


Yuko Harayama, Michela Milano, Richard Baldwin, Céline Antonin,
Janine Berg, Anousheh Karvar, and Andrew Wyckoff

Reflections on Decision-Making and Artificial Intelligence . . . . . . . . . . . . . . 68


Rebecca Finlay and Hideaki Takeda

AI & Human Values: Inequalities, Biases, Fairness, Nudge,
and Feedback Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Laurence Devillers, Françoise Fogelman-Soulié,
and Ricardo Baeza-Yates

Next Big Challenges in Core AI Technology . . . . . . . . . . . . . . . . . . . . . . . 90


Andreas Dengel, Oren Etzioni, Nicole DeCario, Holger Hoos,
Fei-Fei Li, Junichi Tsujii, and Paolo Traverso

AI for Humanity: The Global Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . 116


Jocelyn Maclure and Stuart Russell

AI and Constitutionalism: The Challenges Ahead . . . . . . . . . . . . . . . . . . . . 127


Carlo Casonato

Analyzing the Contribution of Ethical Charters to Building the Future
of Artificial Intelligence Governance . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Lyse Langlois and Catherine Régis

What Does “Ethical by Design” Mean? . . . . . . . . . . . . . . . . . . . . . . . . . . . 171


Vanessa Nurock, Raja Chatila, and Marie-Hélène Parizeau

AI for Digital Humanities and Computational Social Sciences . . . . . . . . . . . 191


Alexandre Gefen, Léa Saint-Raymond, and Tommaso Venturini

Augmented Human and Human-Machine Co-evolution: Efficiency
and Ethics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Andreas Dengel, Laurence Devillers, and Laura Maria Schaal

Democratizing AI for Humanity: A Common Goal . . . . . . . . . . . . . . . . . . . 228


Amir Banifatemi, Nicolas Miailhe, R. Buse Çetin, Alexandre Cadain,
Yolanda Lannquist, and Cyrus Hodes

A Framework for Global Cooperation on Artificial Intelligence
and Its Governance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Pekka Ala-Pietilä and Nathalie A. Smuha

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267


Reflections on AI for Humanity:
Introduction

Bertrand Braunschweig1 and Malik Ghallab2

1 Formerly Inria, Paris, France
[email protected]
2 CNRS, LAAS, Toulouse, France
[email protected]

Abstract. This chapter briefly surveys the current situation of AI with respect to its human and social effects, and to its risks and challenges. It presents a few global initiatives regarding ethical, social and legal aspects of AI. It introduces the remaining chapters of the book and briefly discusses a global cooperation framework on AI and its governance.

1 Context of the Book


Over the last two decades, Artificial Intelligence has moved from a technical area of interest to a focused community of specialists, to a widely popular issue, making the media headlines and bringing daily to the limelight new computational functions and applications. The effectiveness and potential of AI techniques became highly visible, attracting vast private investments and national R&D plans.
The social interest in AI is naturally amplified since its techniques are the mediating means between users and the digital world, which plays a predominant role in personal, social, and economic relations. Comparisons to and competitions with humans in games and several tasks, sometimes transposed and exaggerated uncritically, have boosted the general attention. This interest is matched with a growing concern over several risks and infringements related to, for example, security, confidentiality, exploitation of personal data, or opinion manipulation.
The concerns about AI have been expressed in numerous forums and programs seeking to steer the technical developments toward social good, to mitigate the risks, and to investigate ethical issues. This is illustrated by the initiatives taken by international organizations, such as the United Nations and its specialized agencies [24,39], the European Union [18,42], and the Organisation for Economic Cooperation and Development [30]. Many other initiatives have been taken by technical societies [17], NGOs, foundations, corporations, and academic organizations [14–16,20–22,25,36].
At the political level, statements from several leaders have placed AI as a
geopolitical issue, a matter of power competition in international relations. Calls
for cooperation have been delivered. Recent G7 summits promoted the idea of
setting up a permanent Global Partnership on AI (GPAI), relying on international working groups and annual plenary meetings. In that perspective, the Global Forum on AI for Humanity, held in Paris in October 2019, gathered a large interdisciplinary audience over five workshops and eight technical sessions. Its purpose was to provide an initial input to the GPAI working groups. This book results from the contributions and discussions held at this Global Forum. It is written by the organizers and moderators of the Forum debates.

© Springer Nature Switzerland AG 2021
B. Braunschweig and M. Ghallab (Eds.): Reflections on Artificial Intelligence for Humanity, LNAI 12600, pp. 1–12, 2021. https://doi.org/10.1007/978-3-030-69128-8_1

2 What Is AI Today

Academic controversies about a proper definition of AI, as a science or as a technology, about its weak versus various versions of strength, or its symbolic old-fashioned flavor versus its deep numeric one, may have their interest but are not very relevant to our purpose here. It is sufficient to say that AI techniques have demonstrated convincing results and a significant potential in the mechanization of cognitive functions, for perceiving, reasoning, learning, acting, and interacting. These techniques prosper on and enrich a large interdisciplinary background, mainly from computer science, mathematics, and the cognitive and neurosciences. They rely in particular on (i) data-based approaches, from probability, statistics, and numerical optimization, (ii) model-based approaches, from logic, ontologies, knowledge representations and structures, (iii) heuristic search and constraint propagation methods, and (iv) the fruitful synergies of their algorithmic integrations. They benefit from the tremendous growth of electronics and communication systems.
AI achievements already cover a broad set of capabilities such as image, speech, and scene recognition, natural language processing and interaction, semantic information handling and search, automated planning, scheduling, and diagnosis, and computer-aided design and decision making. Significant progress has been witnessed in almost all the academic competitions and challenges that allow approaches to these capabilities to be compared and that structure developments.1

Successful applications of AI techniques can be found in almost every area of industry and services. Medicine and health have attracted significant developments. The very recent COVID-19 pandemic has already seen numerous proposals, for example in diagnosis and prognosis from medical imaging, protein structure planning for drug discovery, virus nucleic acid testing, epidemiology modeling and forecasting, and text mining and analysis of the scientific literature.2 Transportation is another area of significant AI developments and investments, e.g., in autonomous vehicles. Manufacturing and logistics implement AI over a broad spectrum of deployments, from the design and planning stages to the production stage, with millions of robots in operation integrating more and more AI techniques. Similarly for mining, e.g., to support deep-drill exploration or automated open-pit mining. Space applications are among the early success stories of AI, e.g., [5]. Defense and military applications are a matter of huge investments, as well as concerns. Precision and green agriculture relies on a range of sensing, monitoring, and planning techniques as well as on versatile robots for weeding and crop management tasks. AI was adopted very early in e-commerce for automated pricing, user profiling, and (socially dubious) optimizations; similarly in finance, e.g., in high-frequency trading. Learning and decision-making techniques are extensively used in banking, insurance, and consulting companies. Educational institutions routinely use advanced data and text management tools (e.g., timetabling, plagiarism detection), and personal tutoring techniques are starting to be deployed.3 Automated translation software and vocal assistants with speech recognition and synthesis are commonly marketed, as are very strong board, card, and video game players. Motion planning and automated character animation are successfully used by the film industry. Several natural language and document processing functions are employed by the media, law firms, and many other businesses. Even graphical and musical artists experiment with AI synthesis tools in their work.

1. These are, for example, the challenges in image recognition [23], in question answering [35] and other natural language processing tasks [29], in automated planning [26], in theorem proving [34], and in logistics and other robotics competitions [33].
2. See [2], an early survey (April 2020) of 140 references.
Key indicators for AI show tremendous growth over the last two decades in research, industry, and deployments across many countries. For example, the overall number of peer-reviewed publications has tripled over this period. Funding has increased at an average annual growth rate of 48%, reaching over $70B worldwide. In a recent survey of 2,360 large companies, 58% reported adopting AI in at least one function or business unit [28]. The demand for AI labor vastly exceeds the supply of trained applicants, leading to growing enrollment in AI education as well as to incentives for quickly expanding AI schooling capacities.4

3 AI Risks and Challenges


AI techniques have clearly demonstrated their great beneficial potential for humanity. Numerous scientific and technical bottlenecks remain to be overcome, but progress is accelerating, and the current state of the art already provides approaches to many social challenges. This is illustrated in particular by several projects addressing the United Nations Sustainable Development Goals (SDGs) with AI techniques [38]. AI use cases have been identified for about half of the 169 SDG targets by a UN initiative on big data and artificial intelligence for development, humanitarian action, and peace [37].

However, as with any other technology, the development of AI entails risks. These risks are commensurate with AI's impact and potential. Moreover, rapid technological developments do not leave enough time for social evaluation and adequate regulation. In addition, there are not enough incentives for risk assessment, in research as well as in industrial development; hence there are many more studies of new techniques than of their entailed risks.5

3. e.g., [27,31], the two winning systems of the Global Learning XPrize competition in May 2019.
4. These and other indicators are detailed in the recent AI Index Report [8].
The main issues for AI are how to assess and mitigate the human, social, and environmental risks of its ubiquitous deployment in devices and applications, and how to drive its development toward social good.
AI is deployed in safety-critical applications, such as health, transportation, network and infrastructure management, surveillance, and defense. The corresponding risks in human lives as well as in social and environmental costs are not sufficiently assessed. They give rise to significant challenges for the verification and validation of AI methods.
The individual use of AI tools entails risks for the security of digital interaction and for the privacy and confidentiality of personal information. The insufficient transparency and intelligibility of current techniques imply further risks from uncritical and inadequate uses.
The social acceptability of a technology is much more demanding than its market acceptance. Among other things, social acceptability needs to take into account the long term, including possible impacts on future generations. It has to worry about social cohesion, employment, resource sharing, inclusion, and social recognition. It needs to integrate the imperatives of human rights and the historical, social, cultural, and ethical values of a community. It should consider global constraints affecting the environment or international relations.
The social risks of AI with respect to these requirements are significant. They cover a broad spectrum, from biases in decision support systems (e.g., [7,10]) to fake news, behavior manipulation, and debate steering [13]. They include political risks that can threaten democracy [6] and human rights [9], as well as risks to the economy (implicit price cartels [4], instability of high-frequency trading [11]) and to employment [1]. AI in enhanced or even autonomous lethal weapons and military systems threatens peace and raises strong ethical concerns, e.g., as expressed in a call for a ban on autonomous weapons [19].

4 Worldwide Initiatives on the Societal Impact of AI


Many initiatives, studies, and working groups have been launched to assess the impacts of AI applications. There are also a few meta-studies that analyze and compare these initiatives. In this section, we briefly look at four transnational initiatives backed by major organisations that may have a significant impact on the development and use of AI, and we discuss two relevant meta-studies.

The Partnership on AI. This partnership was created by six companies, Apple, Amazon, Google, Facebook, IBM, and Microsoft, and announced during the Future of Artificial Intelligence conference in 2016. It was subsequently extended into a multi-stakeholder organization which now gathers 100 partners from 13 countries [32]. Its objectives are “to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society”. Since its inception, the Partnership on AI has published a few reports, the most recent being a position paper on the undesirable use of a specific criminal risk assessment tool in the COVID-19 crisis.

5. E.g., according to the survey [28], 13% of companies adopting AI are taking action to mitigate risks.

UNESCO Initiatives. In 2017, the World Commission on the Ethics of Scientific
Knowledge and Technology of UNESCO mandated a working group to develop
a study on the ethics of AI. This led to the publication in 2019 of a Preliminary
Study on the Ethics of AI [41]. This study has a broader scope than other
similar documents, as it addresses UNESCO priority issues such as education,
science, culture, peace and the development of AI in less-favored countries. It
concludes with a list of eleven principles to be included in the requirements for
AI applications, such as human rights, inclusiveness, democracy, sustainability
and quality of life, in addition to the usual demands on transparency, explainability,
and accountability. Following this report, UNESCO created an ad hoc expert
group of 24 specialists from 24 different countries and backgrounds to develop
recommendations on the ethics of AI; the outcome of its work is still pending.

The European Commission’s HLEG. The High Level Expert Group on AI of the
European Commission is among the noticeable international efforts on the soci-
etal impact of AI. Initially composed of 52 multi-disciplinary experts, it started
its work in 2018 and published its first report in December of the same year [18].
The report highlights three characteristics that an AI system should meet during
its lifecycle in order to be trustworthy: “it should be lawful, complying
with all applicable laws and regulations; it should be ethical, ensuring adherence
to ethical principles and values; and it should be robust, both from a technical
and social perspective, since, even with good intentions, AI systems can cause
unintentional harm”. Four ethical principles are stressed: human autonomy; pre-
vention of harm; fairness; explainability. The report makes recommendations
for technical and non-technical methods to achieve seven requirements (human
agency and oversight; technical robustness; etc.).
A period of pilot implementations of the guidelines followed this report;
its results have not yet been published. Meanwhile, the European Commission
released a White Paper on AI [42], which refers to the ethics recommendations
of the HLEG.

The OECD’s Expert Group and Observatory. The OECD created an AI Group of
Experts (AIGO) in September 2018, within its Committee on Digital Economy
Policy, composed of approximately 50 delegates from OECD countries, with
invited experts and other contributors in subgroups. The AIGO published a
report [40], which makes recommendations on national policies and sets a few
“principles for responsible stewardship of trustworthy AI”, similar to those of
other organisations, such as:
6 B. Braunschweig and M. Ghallab

• Inclusive and sustainable growth and well-being,
• Human-centered values and fairness,
• Transparency and explainability,
• Robustness and safety,
• Accountability.

The OECD’s initiatives are pursued within a Network of Experts in AI,
established in February 2020, as well as an Observatory on AI [30].

Fig. 1. Ethical AI Challenges identified across 59 documents (from [8], p. 149).

Meta-studies: Research Devoted to Analyzing and Comparing Diverse Initiatives.
The general AI principles discussed in 74 documents are analyzed in [12]. The
principles are grouped into ten categories (e.g., fairness, transparency, privacy,
collaboration, etc.); the analyzed documents were published between 2017 and
2019 by various organisations. The corresponding website gives access to a 2D-table
with links to referred documents for each analyzed category, for example:

• for the category “Fairness”, the Beijing AI Principles contains the follow-
ing: “making the system as fair as possible, reducing possible discrimination
and biases, improving its transparency, explainability and predictability, and
making the system more traceable, auditable and accountable”.
• for the category “Privacy”, the Montreal AI Declaration states that “Every
person must be able to exercise extensive control over their personal data,
especially when it comes to its collection, use, and dissemination.”

Another meta-study analyzes and maps 36 documents from government, companies,
and other groups related to AI ethics [3]. The map is designed along
eight dimensions: privacy, accountability, safety and security, transparency and
explainability, fairness and non-discrimination, human control of technology,
professional responsibility, and promotion of human values, framed by international
human rights. It allows for convenient comparisons over these dimensions
between the documents. The final version of this analysis shows that most AI
ethics documents address all eight key themes, showing a global convergence on
the issues currently of concern to society.
Finally, let us return to the AI Index [8], which has been monitoring the advancement
of AI along several dimensions: how science and technology are progressing; how
companies are investing; what the employment situation in AI is; how different
countries are placed in the global competition; etc. In its 2019 report, the Index
also covers 59 documents from associations, governments, companies, and think
tanks about ethical AI principles. It summarizes the main topics addressed; the
most popular being fairness, interpretability and explainability, transparency,
accountability, and data privacy (see Fig. 1).

5 Outline of the Book


This book develops the issues discussed at the Global Forum on AI for Humanity.
Each chapter synthesizes and puts into perspective the talks and debates presented
either at a plenary session (for chapters 2 to 10, and 15) or a workshop
(for chapters 11 to 14) of the Forum.
In chapter 2, Raja Chatila and colleagues discuss the motivations for trust-
worthy AI. Human interactions with devices and systems, and social interactions
are increasingly mediated through AI. This entails strong requirements to ensure
trust in critical AI applications, e.g., in health or transportation systems. Tech-
niques and regulations for the explainability, certification and auditing of AI
tools need to be developed. The final part of the chapter examines conditions
and methods for the production of provably beneficial AI systems.
In chapter 3, Sylvie Delacroix and colleagues look at ethical, political and
legal issues with Data governance. The loop from data to information, to knowl-
edge, action and more data collection has been further automated and improved,
leading to stronger impacts, already effective or potential. It is of critical impor-
tance to clarify the mutual dependence of bottom-up empowerment structures
and top-down rules for the social governance of personal data, metadata, and
aggregated data. The chapter ends by exploring the role of data trusts for such
purposes.
Yuko Harayama, Michela Milano and colleagues examine in chapter 4 the
impact of AI on the future of work. The effectiveness of AI in the mechanization
of complex physical and cognitive tasks has strong economic impacts, as well
as socially disruptive potential, given in particular its rapid progress. Proactive
measures may be needed. This requires a good understanding of the likely
effects of AI on the main economic channels and the transformation of work.
The chapter presents complementary views on economy, job quality and policies
as discussed at the Global Forum.

Rebecca Finlay and Hideaki Takeda report in chapter 5 about the delegation
of decisions to machines. Delegating simple daily life or complex professional
decisions to a computerized personal assistant, to a digital twin, can amplify
our capabilities or be a source of alienation. The requirements to circumvent
the latter include in particular intelligible procedures, articulate and explicit
explanations, permanent alignment of the machine’s assessment functions with
our criteria, as well as anticipation of and provision for an effective transfer of
control back to humans, when desirable.
In chapter 6 Françoise Fogelman-Soulié, Laurence Devillers and Ricardo
Baeza-Yates address the subject of AI & Human values such as equity, protec-
tion against biases and fairness, with a specific focus on nudging and feedback
loop effects. Automated or computer-aided decisions can be unfair, because of
possibly unintended biases in algorithms or in training data. What technical
and operational measures may be needed to ensure that AI systems comply with
essential human values, that their use is socially acceptable, and possibly even
desirable for strengthening social bonds?
Chapter 7, coordinated by Paolo Traverso addresses important core AI sci-
entific and technological challenges: understanding the inner mechanisms of
deep neural networks; optimising the neural networks architectures; moving to
explainable and auditable AI in order to augment trust in these systems; and
attempting to solve the talent bottleneck in modern artificial intelligence by using
automated machine learning. The field of AI is rich in technical and scientific
challenges, as can be seen from the examples given in this chapter.
In chapter 8, Jocelyn Maclure and Stuart Russell consider some of the major
challenges for developing inclusive and equitable education, improving health-
care, advancing scientific knowledge and preserving the planet. They examine
how properly designed AI systems can help address some of the United Nations
SDGs. They discuss the conditions required to bring into play AI for these chal-
lenges. They underline in particular that neither pure knowledge-based
approaches nor pure machine learning can solve the global challenges outlined
in the chapter; hybrid approaches are needed.
In chapter 9, Carlo Casonato reflects on legal and constitutional issues raised
by AI. Taking many examples from real-world usage of AI, mainly in justice,
health and medicine, Casonato puts the different viewpoints expressed in the
previous chapters into a new perspective, regarding regulations, democracy,
anthropology and human rights. The chapter ends with a proposal for a set
of new (or renewed) human rights, in order to achieve a balanced and constitu-
tionally oriented framework for specific rights for a human-centered deployment
of AI systems.
The question of ethical charters for AI is discussed in chapter 10 by Lyse
Langlois and Catherine Régis. Looking at the current landscape of ethical charters,
which has flourished extensively in recent years, the chapter examines the
fundamentals of ethics and discusses their relations with law and regulations. It
concludes with remarks on the suitability of GPAI, the UN and UNESCO for
taking the lead in international regulatory efforts towards globally accepted
ethics charters for AI.

Continuing on ethical issues related to AI, Vanessa Nurock and colleagues
propose in chapter 11 an in-depth analysis of the notion of “ethics by design”, as
compared to other framings such as, for example, privacy by design or responsible
innovation. The chapter examines current approaches for applying ethics to AI
and concludes with guidelines for an ethics by design that requires answering
four questions on “care”.
AI with respect to the humanities and social sciences is discussed by Alexandre
Gefen in chapter 12 from two perspectives: as an important topic of investigation
and as a new means for research. The questions about AI and its human and
social consequences are invading the public sphere through the multiple issues of
acceptability, privacy protection or economic impact, requiring the expertise and
strong involvement of every area of the humanities and social sciences. AI also
offers new tools to the social sciences, humanities and arts, including massive data
extraction and processing, machine learning and wide network analysis.
In chapter 13 Andreas Dengel and Laurence Devillers report on the state
of the art of Human-Machine Co-Creation, Co-Learning and Co-Adaptation,
and discuss how to anticipate the corresponding ethical risks. Humans’ ambiguous
relationships with symbiotic or autonomous machines raise numerous ethical
problems. Augmented intelligence and superintelligent AI are major topics for
the future of human society. Robotic simulation has the virtue of question-
ing the nature of our own intelligence. Capturing, transmitting and mimicking
our feelings will open up new applications in health, education, transport and
entertainment.
Chapter 14, by Nicolas Miailhe and colleagues, is devoted to “AI Commons”,
a global non-profit initiative which aims to democratize responsible adoption and
deployment of AI solutions for social good applications addressing the seventeen
UN SDGs. This project brings together a wide range of stakeholders around inno-
vative and holistic problem “identification-to-solution” frameworks and proto-
cols. Its ultimate objectives are to pool critical AI capabilities (data, algorithms,
domain specific knowledge, talent, tools and models, computing power and stor-
age) into an open and collaborative platform that can be used to scale up the
use of AI for Everyone.
Finally, Pekka Ala-Pietilä and Nathalie Smuha conclude the book with a
framework for global cooperation on AI and its governance. This is certainly
an essential issue in a critical period for AI. The chapter clarifies why such
governance is needed jointly with international cooperation. It lists the main
areas for which international cooperation should be prioritized, with respect to
the socio-technical environment of AI in a transversal manner, as well as with
respect to the socio-technical environments of data and digital infrastructure,
these two dimensions being tightly coupled. It concludes by assessing how global
cooperation should be organized, stressing the need to balance speed, holism and
contextualism, and providing a number of guiding principles that can inform the
process of global cooperation initiatives on AI and its governance.
This book collects views from leading experts on AI and its human, ethical,
social, and legal implications. Each chapter is self-contained and addresses a

specific set of issues, with links to other chapters. To further guide the reader
about the organization of the covered topics, a possible clustering (with overlaps)
of these “Reflections on Artificial Intelligence for Humanity” is the following:
• chapters 7, 13 and 14 are mainly devoted to technological and scientific chal-
lenges with AI and to some developments designed to address them;
• chapters 5, 6, 10, and 11 focus on different ethical issues associated with AI;
• chapters 2, 3, 4, 5, and 6 cover the social impacts of AI at the workplace and
in personal applications;
• chapters 7, 8, 12 and 13 discuss the possible benefits and risks of AI in several
areas such as health, justice, education, humanities and social sciences;
• chapters 3, 9, 14, and 15 address legal and organizational issues raised by
AI.

6 What’s Next: An Opening for GPAI


The GFAIH forum was a step in the preparation of GPAI, the Global Partnership
on Artificial Intelligence. Launched by France and Canada on the sidelines of the
Canadian presidency of the G7, this initiative aims to organize independent
global expertise on the ethical regulation of AI.
Following the Franco-Canadian Declaration on AI of June 7, 2018, and the
production of a mandate for an international group of experts in artificial intel-
ligence (G2IA), France and Canada jointly decided to include the GPAI on the
agenda of the French presidency of the G7, in order to place this initiative in a
multilateral framework. The G7 digital ministerial meeting in May 2019 helped
secure the support of Germany, Italy, Japan, the United Kingdom, New Zealand,
India and the European Union for the launch of the GPAI. The G7 summit in
Biarritz on 24–26 August 2019 made it possible to obtain the support of the
G7 States for this initiative, renamed the Global Partnership on AI (GPAI),
and of the four invited countries (India, Chile, South Africa and Australia) and
New Zealand, giving a strong political mandate to the initiative thanks to the
Biarritz Strategy for an open, free and secure digital transformation. Canada
and France also agreed on a tripartite structure for the GPAI, consisting of two
centres of expertise in Paris and Montreal and a secretariat hosted at the OECD
in Paris to avoid work duplication and maximize synergies, while maintaining
strict independence of the experts’ work. A major step was taken on June 15th,
2020, when fifteen countries, including all G7 members, simultaneously
announced the launch of the Partnership and their commitment to make it a
success.
This initiative will permit an upstream dialogue between the best scientists
and experts and public decision-makers, which is a key condition for designing
effective responses and recommendations necessary to cope with current and
future challenges faced by our societies. The GPAI will produce, on a compre-
hensive, objective, open and transparent basis, analyses of scientific, technical
and socio-economic information relevant to understanding the impacts of AI,
encouraging its responsible development, and mitigating its risks. This work will

follow a project-based approach, with a strong technical dimension. Comple-


mentary to other approaches such as the four initiatives mentioned above, the
work of GPAI will be mostly driven by science and will include representative
experimentation to support its recommendations.
Four working groups were initially identified in GPAI, on, respectively, the
issues of responsible AI, data governance, the future of work, and innovation and
commercialization. A fifth working group on the response to the current pan-
demic situation and to other possible pandemics has been created as a subgroup
of “Responsible AI”. There is a clear link between the topics of the Global forum,
the chapters of this book and the four main working groups of GPAI: the “data
governance” and “future of work” themes are direct matches, whereas several
chapters contribute to “Responsible AI” (chapters 2, 5, 6, 7, 11 in particular)
and to “Innovation and commercialization” (chapters 2, 7, 8, 15 in particular).
The first plenary meeting of GPAI experts took place online in early December
2020;6 the second will take place in Paris in 2021.
It has become crucial to consolidate democracies at a time when technolog-
ical competition is intensifying, while the risks of Internet fragmentation and
AI social impacts are deepening. GPAI aspires to bring together like-minded
countries, sharing the same democratic values in order to promote a socially
responsible, ethical vision of AI.

References
1. Arntz, M., Gregory, T., Zierahn, U.: The Risk of Automation for Jobs in OECD
Countries. OECD Social, Employment and Migration Working Papers (189)
(2016). https://doi.org/10.1787/5jlz9h56dvq7-en. https://www.oecd-ilibrary.org/
content/paper/5jlz9h56dvq7-en
2. Bullock, J., Luccioni, A., Pham, K.H., Lam, C.S.N., Luengo-Oroz, M.: Mapping
the landscape of artificial intelligence applications against covid-19. arXiv (2020).
https://arxiv.org/abs/2003.11336
3. Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., Srikumar, M.: Principled artificial
intelligence: mapping consensus in ethical and rights-based approaches to principles
for AI. Technical report 2020, Berkman Klein Center Research Publication (2020).
https://doi.org/10.2139/ssrn.3518482
4. Gal, M.S.: Illegal pricing algorithms. Commun. ACM 62(1), 18–20 (2019)
5. Muscettola, N., Nayak, P.P., Pell, B., Williams, B.C.: Remote agent: to boldly go
where no AI system has gone before. Artif. Intell. 103, 5–47 (1998)
6. Nemitz, P.: Constitutional democracy and technology in the age of artificial intel-
ligence. Philos. Trans. Roy. Soc. A: Math. Phys. Eng. Sci. 376(2133), 1–14 (2018)
7. O’Neil, C.: Weapons of Math Destruction: How Big Data Increases Inequality and
Threatens Democracy. Crown Random House, New York (2016)
8. Perrault, R., et al.: The AI index 2019 annual report. Technical report, Stanford
University (2019). http://aiindex.org
9. Raso, F., Hilligoss, H., Krishnamurthy, V., Bavitz, C., Kim, L.Y.: Artificial Intel-
ligence & Human Rights: Opportunities & Risks. SSRN, September 2018

6 See http://gpai.ai.

10. Skeem, J.L., Lowenkamp, C.: Risk, Race, & Recidivism: Predictive Bias and Dis-
parate Impact. SSRN (2016)
11. Sornette, D., von der Becke, S.: Crashes and High Frequency Trading. SSRN,
August 2011
12. Zeng, Y., Lu, E., Huangfu, C.: Linking artificial intelligence principles. arXiv
(2018). https://arxiv.org/abs/1812.04814v1
13. Zuboff, S.: The Age of Surveillance Capitalism. PublicAffairs, New York (2019)
14. AI for good foundation. https://ai4good.org/about/
15. AI now institute. https://ainowinstitute.org/
16. Ai4People. http://www.eismd.eu/ai4people/
17. Ethically Aligned Design. https://standards.ieee.org/content/dam/ieee-
standards/standards/web/documents/other/ead v2.pdf
18. EU high level expert group on AI. https://ec.europa.eu/digital-single-market/en/
high-level-expert-group-artificial-intelligence
19. The Future of Life Institute. https://futureoflife.org/open-letter-autonomous-
weapons/?cn-reloaded=1
20. The Global Challenges Foundation. https://globalchallenges.org/about/the-
global-challenges-foundation/
21. Human-Centered AI. http://hai.stanford.edu/
22. Humane AI. http://www.humane-ai.eu/
23. Image Net. http://image-net.org/
24. International Telecommunication Union. https://www.itu.int/dms pub/itu-s/
opb/journal/S-JOURNAL-ICTS.V1I1-2017-1-PDF-E.pdf
25. International Observatory on the Societal Impacts of AI. https://observatoire-ia.
ulaval.ca/
26. International Planning Competition. http://icaps-conference.org/index.php/
Main/Competitions
27. Kitkit School. http://kitkitschool.com/
28. Mckinsey Global Institute. https://www.mckinsey.com/featured-insights/
artificial-intelligence/global-ai-survey-ai-proves-its-worth-but-few-scale-impact
29. NLP Competitions. https://codalab-worksheets.readthedocs.io/en/latest/
Competitions/#list-of-competitions
30. OECD AI Policy Observatory. http://www.oecd.org/going-digital/ai/oecd-
initiatives-on-ai.htm
31. OneTab. https://onebillion.org/
32. Partnership on AI. https://www.partnershiponai.org/research-lander/
33. RoboCup. https://www.robocup.org/
34. SAT Competitions. http://satcompetition.org/
35. SQuAD Explorer. https://rajpurkar.github.io/SQuAD-explorer/
36. UK center for the governance of AI. https://www.fhi.ox.ac.uk/governance-ai-
program/
37. Un Global Pulse. https://www.unglobalpulse.org/
38. Un Sustainable Development Goals. https://sustainabledevelopment.un.org/?
menu=1300
39. UNESCO. https://en.unesco.org/artificial-intelligence
40. Deliberations of the expert group on artificial intelligence at the OECD (2019).
https://www.oecd-ilibrary.org/
41. Preliminary study on the ethics of artificial intelligence (2019). https://unesdoc.
unesco.org/
42. EU white paper on AI (2020). https://ec.europa.eu/info/publications/white-
paper-artificial-intelligence-european-approach-excellence-and-trust en
Trustworthy AI

Raja Chatila1(B), Virginia Dignum2, Michael Fisher3, Fosca Giannotti4,
Katharina Morik5, Stuart Russell6, and Karen Yeung7

1 Sorbonne University, Paris, France
[email protected]
2 Umeå University, Umeå, Sweden
3 University of Manchester, Manchester, UK
4 CNR Pisa, Pisa, Italy
5 TU Dortmund University, Dortmund, Germany
6 University of California, Berkeley, USA
7 University of Birmingham, Birmingham, UK

Abstract. Modern AI systems have come into widespread use in almost
all sectors, with a strong impact on our society. However, the very meth-
ods on which they rely, based on machine learning techniques for pro-
cessing data to predict outcomes and to make decisions, are opaque,
prone to bias and may produce wrong answers. Objective functions opti-
mized in learning systems are not guaranteed to align with the values
that motivated their definition. Properties such as transparency, verifia-
bility, explainability, security, technical robustness and safety are key to
building operational governance frameworks, so as to make AI systems
justifiably trustworthy and to align their development and use with
human rights and values.

Keywords: Human rights · Machine learning · Interpretability ·
Explainability · Dependability · Verification and validation · Beneficial AI

This chapter addresses different aspects of the trustworthiness of AI systems.
It is a collective contribution from Virginia Dignum (Sect. 1), Raja Chatila
(Sect. 2), Katharina Morik (Sect. 3), Fosca Giannotti (Sect. 4), Michael Fisher
(Sect. 5), Karen Yeung (Sect. 6), and Stuart Russell (Sect. 7).

1 The Necessity of Trustworthy AI


The recent developments in Artificial Intelligence (AI) hold great promises for
humanity and society. However, as with any potentially disruptive innovation,
AI also brings challenges, in particular where it concerns safety, privacy, bias,
impact on work and education, and how to align legislation and regulation with
the rapid changes of AI technology. A responsible approach to the development and
Springer Nature Switzerland AG 2021
B. Braunschweig and M. Ghallab (Eds.): Reflections on Artificial Intelligence
for Humanity, LNAI 12600, pp. 13–39, 2021.
https://doi.org/10.1007/978-3-030-69128-8_2

use of AI is needed to facilitate trust in AI and ensure that all can profit from
the benefits of AI. This can guard against the use of biased data or algorithms,
ensure that automated decisions are justified and explainable, and help maintain
privacy of individuals.
In recent years, we have seen a rise of efforts around the ethical, societal
and legal impact of AI. These are the result of concerted action by national and
transnational governance bodies, including the European Union, the OECD, the
UK, France, Canada and others, but have often also originated from bottom-up
initiatives, launched by practitioners or the scientific community. A few of the
most well-known initiatives are:

– IEEE initiative on Ethics of Autonomous and Intelligent Systems1
– High Level Expert Group on AI of the European Commission2
– the Partnership on AI3
– the French AI for Humanity strategy4
– the Select Committee on AI of the British House of Lords5

These initiatives aim at providing concrete recommendations, standards and
policy suggestions to support the development, deployment and use of AI sys-
tems. Many others have focused on analysing the values and principles to which
AI systems, and their development and use, should adhere. In fact, hardly a
week goes by without news about yet another declaration of principles for AI,
or of other initiatives at national or corporate level. For up-to-date information
on all such initiatives, check Alan Winfield’s blog6 or the crowdsourced effort
coordinated by Doteveryone.7 Moreover,
several groups have provided detailed analysis and comparison of the different
proposals [16,34].
Trustworthy AI, as defined by the High Level Expert Group on AI of the
European Union,8 is:

1. lawful, i.e. complying with all applicable laws and regulations;
2. ethical, i.e. ensuring adherence to ethical principles and values;
3. robust, both from a technical and social perspective since, even with good
intentions, AI systems can cause unintentional harm.

In order to achieve trustworthy AI, it is important to understand the properties
of AI technology, as determined by the advances in computation techniques
and data analytics. AI technology is an artefact, a software system (possibly
1
https://ethicsinaction.ieee.org/.
2
https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-
intelligence.
3
https://www.partnershiponai.org/.
4
https://www.aiforhumanity.fr/en/.
5
https://www.parliament.uk/ai-committee.
6
http://alanwinfield.blogspot.com/2017/12.
7
https://goo.gl/ibffk4 (maintained in Google docs).
8
https://ec.europa.eu/newsroom/dae/document.cfm?doc id=60419.

embedded in hardware) designed by humans that, given a complex goal, is able
to take a decision based on a process of perception, interpretation and reasoning
over data collected about that environment. In many cases this process is
considered ‘autonomous’ (by which it is meant that there may be limited need
for human intervention after the setting of the goals), ‘adaptive’ (meaning that
the system is able to update its behaviour in response to changes in the
environment), and ‘interactive’ (given that it acts in a physical or digital
dimension where people and other systems co-exist). Even though many AI
systems currently exhibit only one of these properties, it is their combination
that is at the basis of the current interest in and results of AI, and that fuels
the public’s fears and expectations [11].
Guidelines, principles and strategies must be directed to these socio-technical
systems. It is not the AI artefact that is ethical, trustworthy, or responsible.
Rather, it is the social component of the socio-technical system that can and
should take responsibility and act in consideration of an ethical framework such
that the overall system can be trusted by the society. Trustworthy AI, or AI
ethics, is not about giving machines some kind of ‘responsibility’ for their actions
and decisions, and in the process, possibly discharge people and organisations of
their responsibility. On the contrary, trustworthy AI requires more responsibility
and more accountability from the people and organisations involved: for the
decisions and actions of the AI applications, and for their own decision of using
AI on a given application context.
Moreover, it is important to realise that any requirements for trustworthy AI,
such as those proposed by the several initiatives listed above, are necessary but
not sufficient to develop human-centered AI. That is, such requirements need
to be understood and implemented from a contextual perspective: it should be
possible to adjust the implementation of a requirement such as transparency
based on the context in which the system is used. Requirements such as
transparency should not have one fixed definition for all AI systems, but rather
be defined based on how the AI system is used. At the same time, any AI
technique used in the design and implementation should be amenable to explicitly
considering all ethical requirements: for example, it should be possible to explain
(or to show) how the system arrived at a certain decision or behaviour.
In the remainder of this chapter, we explore the many different aspects that
are included in, or result from, a responsible approach to AI development and
use, which truly enables trustworthy AI.

2 The Meaning of Trust Regarding Machines


2.1 Technical Trust
Any technology is developed to provide a service fulfilling some needs. When
deployed, its adoption depends on its ability to actually deliver the expected
service safely, and to meet user expectations in terms of quality and continuity
of service. In addition, users expect that the technology will not do anything
it is not supposed to do, i.e., anything about which they were not informed. These
are very basic conditions that apply to any technological object or system, from
16 R. Chatila et al.

a toaster in your kitchen to an airliner. If people are convinced that a technology
has these features, they will use it, trusting it will deliver the expected service.
In addition, long-term impacts should also be considered, but they are often
discarded or neglected in favour of immediate short-term gains.
Like other technologies, computerized socio-technical systems, i.e., those
based on algorithmic computations and decisions that impact human individuals
and society in one way or another, must be trustworthy. This implies several
attributes that have been classically addressed in software engineering under the
general designation of ‘dependability’ which is defined as the “delivery of service
that can justifiably be trusted” [4]. This entails the following properties:
– Availability: readiness for correct service;
– Reliability: continuity of correct service;
– Safety: absence of catastrophic consequences on the user(s) and the environ-
ment;
– Confidentiality: absence of unauthorized disclosure of information;
– Integrity: absence of improper system alterations;
– Maintainability: ability to undergo modifications and repairs;
– Security: the concurrent existence of availability for authorized users only,
confidentiality, and integrity (with ‘improper’ meaning ‘unauthorized’).
The realization of these properties includes verification and validation tech-
niques (see Sect. 5) and has become essential in sectors in which critical func-
tions are assumed by computer systems. Such functions are in particular those
whose failure entails major disruptions of the service delivered by the systems,
which might lead to catastrophic consequences involving human lives. Computer
systems engineering has developed a whole body of research and methods on
dependable systems, applied in particular in the aeronautics industry and in the
control of electricity distribution networks.
These techniques have been rather ignored or minimized in the recent
development of learning-based AI systems. Indeed, learning techniques based
on statistics and on detecting regularities in data use millions of parameters
which are not explicitly in a causal relation with the results, hence the black-box
depiction of these systems. The results, even when reaching high levels of
accuracy, are not explainable. Worse, they can be totally wrong [1], revealing
the lack of semantics in these systems.
This lack of explainability is an important factor in reducing trust in the
system, and has motivated wide research interest [6]; see Sects. 3 and 4,
which provide two views on explainability. It is only by reaching a high
and provable level of technical robustness and safety that AI systems can be
technically trusted.
An important question has to be clarified in this context, as in some appli-
cations such as automated driving, or autonomous weapons, there are discus-
sions about the possibility that ethical decisions could be delegated to machines.
Ethics are founded on the abstract notion of human dignity and are related to
human autonomy and agency, the capacity to deliberate and to act freely and
intentionally. Machines (i.e., digital computers) on the other hand operate at
the syntactic computational level and can only decide and act within a bounded
set of possibilities defined directly or indirectly (e.g., through machine learning)
by human programmers. It is therefore not possible that machines take ethical
decisions, even if their actions could have ethical consequences. This means that
no decisions implying ethical deliberation with critical consequences should be
delegated to machines.

2.2 Governance

However, technical solutions are only one necessary condition. If there is no
framework to facilitate or even impose their adoption, there will be no guarantee
that they are actually embedded in commercial systems. Therefore governance
becomes another condition for trust.
Indeed, as in other sectors, technical standards, certification processes by
independent and recognized authorities, audit mechanisms, and regulations
imposing these mechanisms are essential factors in building trust in and adoption
of technologies. Other factors are related to ethics as well as to soft law
approaches that could lead private companies to adopt virtuous design and
development processes. All these issues relate to governance and are largely
discussed, for instance, in [17] and in [32]. Perhaps one of the most relevant
lists of recommendations to this effect is the “Ethics Guidelines for Trustworthy
AI” issued by the High-Level Expert Group on AI appointed by the European
Commission [29] (see Sect. 1). Two of the seven “Key requirements for Trustworthy
AI” directly point to necessary governance mechanisms:

– Transparency. The data, system and AI business models should be transparent.
Traceability mechanisms can help achieve this. Moreover, AI systems
and their decisions should be explained in a manner adapted to the stakeholder
concerned. Humans need to be aware that they are interacting with an
AI system, and must be informed of the system’s capabilities and limitations.
– Accountability. Mechanisms should be put in place to ensure responsibility
and accountability for AI systems and their outcomes. Auditability, which
enables the assessment of algorithms, data and design processes, plays a key
role therein, especially in critical applications. Moreover, adequate and acces-
sible redress should be ensured.
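The traceability and auditability mechanisms mentioned above can be made concrete with a thin logging wrapper around a predictive model. The sketch below is purely illustrative: the class names and the toy `ThresholdModel` are invented for this example and do not come from any specific framework.

```python
import json
import time

class AuditedModel:
    """Wraps a predictive model so that every decision is recorded for audit.

    The wrapped `model` only needs a `predict(features)` method; the audit
    trail captures inputs, outputs, and timestamps for later inspection
    or redress.
    """

    def __init__(self, model, model_id):
        self.model = model
        self.model_id = model_id
        self.audit_log = []  # in practice: append-only, tamper-evident storage

    def predict(self, features):
        decision = self.model.predict(features)
        self.audit_log.append({
            "model_id": self.model_id,
            "timestamp": time.time(),
            "input": features,
            "decision": decision,
        })
        return decision

    def export_log(self):
        # Serialized trail that an auditor could replay against the model.
        return json.dumps(self.audit_log, indent=2)

# Hypothetical stand-in for a trained classifier.
class ThresholdModel:
    def predict(self, features):
        return "approve" if features["score"] >= 0.5 else "reject"

audited = AuditedModel(ThresholdModel(), model_id="credit-v1")
print(audited.predict({"score": 0.7}))
print(len(audited.audit_log))
```

In a deployed system the log would be written to append-only storage and would include model versions and explanation artefacts; here it only illustrates the principle that every automated decision leaves an auditable trace.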

3 The Difficulty of Understanding

The pioneering work “Learning interpretable models” [54] starts with a saying
of Henry Louis Mencken:

   There is always an easy solution to every human problem –
   neat, plausible, and wrong.

This directly leads us to the problem of understanding with its two faces, the
complexity of what is to be explained, and the human predilection for simple
explanations that fit into what is already known. When applying the saying to
understanding AI systems, we may state that AI systems are not neat and are
based on assumptions and theories that are not plausible in the first instance.
Since we are not interested in wrong assertions, we exclude easy solutions and
take a look at the complexity of AI systems and human understanding.

3.1 Complexity of AI Systems


Computer systems are ubiquitous and many of them entail some AI processes,
which may interact with each other. The user might perceive just the embedding
system, possibly not aware of what is going on behind the scenes.
A search engine or social network platform, for instance, shows a band of
advertisements along with the search results. An online auction determines for
each query of a user which brands are displayed. Companies (buyers) bid to show
an ad and are rewarded for the served ad or for the increase in product purchases
by the brands (marketers) which they represent. The buyers compete with each
other and, internally, each buyer selects among his marketers which brand has
the best chance to win the auction [58]. Moreover, marketers adapt their websites
to the likelihood of being selected. At least four systems are involved here: the
embedding system (e.g. a search engine), the auction system running real-time
bidding, the buyer, and the marketer system. Each of these puts machine learning
to good use. The buyer learns the probability that an ad is clicked by the user
or even leads to a sale; another learning program of the buyer optimizes the
price for a bid. The marketer learns a model that relates the wording and the
images at its website to the success of being presented to the user or selected
by the buyer. For each learning algorithm to be understood at an abstract level,
knowledge of statistics and optimization is required. The interaction of all the
systems leads to the particular ads on display and adds even more complexity.
Finally, the data about the brands, the click-through data of users, and the data
about the auction results are extremely high-dimensional and for learning they
are sampled in various ways.
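The interacting pipeline described above can be illustrated with a toy auction. The click-through-rate (CTR) values, the per-click value, and the second-price rule below are all invented for illustration; real-time bidding systems are far more elaborate.

```python
# Toy model of the ad auction described above: each buyer holds learned
# CTR estimates for the brands (marketers) it represents.
buyers = {
    "buyer_A": {"brand_1": 0.031, "brand_2": 0.008},
    "buyer_B": {"brand_3": 0.024},
}
VALUE_PER_CLICK = 2.0  # assumed value of one click to a marketer

def run_auction(buyers):
    # Internally, each buyer first selects its most promising brand ...
    bids = []
    for buyer, brands in buyers.items():
        brand, ctr = max(brands.items(), key=lambda kv: kv[1])
        bids.append((ctr * VALUE_PER_CLICK, buyer, brand))
    # ... then the buyers compete; a second-price rule sets the cost,
    # so the winner pays the runner-up's bid.
    bids.sort(reverse=True)
    (_, winner, brand), (price, _, _) = bids[0], bids[1]
    return winner, brand, price

winner, brand, price = run_auction(buyers)
print(winner, brand, round(price, 3))
```

Each CTR number here stands in for an entire learned model, which is precisely why the composite behaviour is hard to explain: several systems, each optimizing its own objective, jointly determine which ad appears.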
If the recommendation of a brand is justified by similarity with users who
clicked on the ad of this brand, we have to admit that the notion of “similarity”
here is itself complex. Where users might think of personality traits, interests, or
location, the system calculates the distance between two entities at a much
finer granularity. Actually, a thousand features of the user data are weighted
to form a vector whose cosine angle with the vector of another user indicates
the similarity, or some other kernel function computes the similarity between
the data of two users. If some clustering algorithm groups users according to
similarity, its heuristic search procedure is not deterministic, i.e. results may
vary even on the same data set using the same learning algorithm. Hence, the
underlying similarity is not of a kind that the user would call “similarity”.
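The vector-based notion of similarity described above can be sketched in a few lines. The three-dimensional vectors below are invented stand-ins for the roughly thousand weighted features a real system would use.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two user-feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Illustrative weighted feature vectors, e.g. clicks on sports, news, travel.
user_1 = [0.9, 0.1, 0.3]
user_2 = [0.8, 0.2, 0.4]
user_3 = [0.0, 1.0, 0.1]

print(round(cosine_similarity(user_1, user_2), 3))  # high: 'similar' users
print(round(cosine_similarity(user_1, user_3), 3))  # much lower
```

Nothing in this computation corresponds to what a user would intuitively call “similarity”; it is a geometric quantity over weighted coordinates, which is exactly the gap the text points out.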
An analysis process is actually a sequence of steps and some of them are again
composed of sequences. As if that were not hard enough to understand, the
overall process and its sub-processes are subject to optimization themselves.
In interaction with the developer, RapidMiner recommends enhancements for
an analysis process based on its learning from processes (see
https://rapidminer.com/blog/). Moreover, the system creates and selects
features using multi-objective optimization [43]. Many
auto modeling approaches are around today [31,35,40]. The self-optimization
of machine learning also applies to the level of implementing the algorithms
on hardware architectures [7,37]. Hence, even if the statistical formula and the
abstract algorithm are well understood by a user, there remains the part of the
actual implementation on a particular computing architecture including all the
optimizations.
Machine learning algorithms themselves are often compositions. In the sim-
plest case, an ensemble of learned models outputs their majority vote. In the
more complex setting of probabilistic graphical models, nodes with some states
are linked to form a graph. The structure of the graph indicates the condi-
tional independence structure of the nodes, given their neighboring nodes. Here,
the design of the nodes and their neighborhoods may involve human knowledge
about the domain which is modeled. This eases the understanding of the model.
The likelihood of a node’s state depends on the states of all the other nodes,
whose likelihoods, in turn, are estimated based on observations. Graphical models
estimate a joint probability distribution over all the states of all the nodes.
Understanding this requires statistical reasoning. The inference of the likelihood
of a certain state of a subset of the nodes, i.e. the answer to a question of a
user, is a hard problem. There exists a variety of algorithms that approximate
the inference. For a user with statistical knowledge, the explicit uncertainty that
comes together with a model’s answer helps in reflecting on how reliable the
answer is. However, at another level, within the most prominent classes
(variational inference, loopy belief propagation, and Gibbs sampling), diverse
algorithms have been developed for specific computing architectures, and each
implementation comes along with its own error bounds, memory, energy, and
run-time demands.
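The simplest composition mentioned above, an ensemble that outputs a majority vote, can be sketched directly. The three “models” below are hypothetical stand-ins for independently trained classifiers.

```python
from collections import Counter

def majority_vote(models, x):
    """Each learned model casts one vote; the most frequent label wins."""
    votes = [model(x) for model in models]
    return Counter(votes).most_common(1)[0][0]

# Invented stand-ins for three trained classifiers (decision stumps).
stump_1 = lambda email: "spam" if email["links"] > 3 else "ham"
stump_2 = lambda email: "spam" if email["caps_ratio"] > 0.5 else "ham"
stump_3 = lambda email: "spam" if email["length"] < 20 else "ham"

email = {"links": 5, "caps_ratio": 0.7, "length": 200}
print(majority_vote([stump_1, stump_2, stump_3], email))
```

Even this trivial composition already blurs explanation: the ensemble's answer is not the answer of any single model, and a dissenting model is simply outvoted.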
Deep learning methods are composed of several functions, organized into lay-
ers. Between the input nodes and the output nodes are several layers of different
types that transform the high-dimensional input step by step into higher-level
features such that in the end a classification can be performed in a better rep-
resentation space with fewer dimensions. Given the observations and their class
membership, learning – or, to be more precise: its optimization procedure –
delivers features and local patterns at the intermediate layers. Sometimes,
especially for pictures, which can be interpreted by every user, visualizations of
the intermediate local patterns can be interpreted, e.g., as the eye areas of faces.
Most often, however, the intermediate representations learned do not correspond
to the high-level features that human experts use. There are almost infinitely many architectures
that combine different layer types. In addition, setting up the training has many
degrees of freedom. We know that deep neural networks are capable of learn-
ing every function approximately. However, we do not know whether a particular
network architecture with a particular learning set-up delivers the best model.
It is most likely that better models exist, but the only way to find them is trial
and error. The theoretical propositions of error bounds and resource demands
are not always available. Explanation approaches work on the network with the
trained weights and learn an explanation on top of it [56]. A well-known tech-
nique is the Layer-wise Relevance Propagation [5]. Understanding the principles
of deep learning and its explanation requires sound knowledge in optimization
and algorithmics. Understanding the explanation itself is easy if pictures are clas-
sified, because their parts are interpretable. For more abstract signals, even
understanding the explanation requires some training. In sum, the many
development decisions at several levels of abstraction that make up for an AI
system are complex both in themselves and in their interaction.
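The layer-wise composition described above can be sketched as a forward pass. The random weights below are placeholders for a trained model; they serve only to show how a high-dimensional input is transformed step by step into a small representation.

```python
import random

random.seed(0)

def relu(v):
    # Elementwise nonlinearity applied after each linear map.
    return [max(0.0, x) for x in v]

def linear(v, weights, bias):
    # weights: one weight vector per output neuron.
    return [sum(w_i * x_i for w_i, x_i in zip(w, v)) + b
            for w, b in zip(weights, bias)]

def random_layer(n_in, n_out):
    weights = [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)]
    return weights, [0.0] * n_out

# A deep model is a composition of layer functions that transforms a
# high-dimensional input, step by step, into a smaller representation.
layers = [random_layer(100, 32), random_layer(32, 8), random_layer(8, 2)]

def forward(v, layers):
    for weights, bias in layers:
        v = relu(linear(v, weights, bias))
    return v

x = [random.gauss(0, 1) for _ in range(100)]  # one high-dimensional input
scores = forward(x, layers)
print(len(scores))  # dimensionality reduced from 100 to 2
```

Everything the network “knows” is buried in these weight matrices; the composition is easy to execute but, as the text argues, hard to interpret.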

3.2 Human Understanding


The broad field of human understanding is studied in cognitive psychology,
education, and philosophical epistemology [13,19,21,27]. The meaning of
“understanding” is closely related to “knowing” and “explaining”, and discussing
it has always been a stimulating subject in AI research. In the early days, AI systems
were designed to explain human behavior, because with the systems, experiments
can be made which are otherwise impossible, and properties like the complexity
of reasoning could be proven mathematically (e.g. [46,49]).
More recently, attention moved to the human understanding of AI systems
[18]. Here, we ask at which level a system is to be understood and which capabil-
ities of the human decision-maker or user match which type of explanation. As
has been shown in the previous section, understanding the principles of an AI
system requires some statistical knowledge and familiarity with optimization.
We are all born with a mathematical sense wired into the brain [9], so that
we can learn this. The problem is that we have to learn it, and not everybody
has. As has been shown in a study comparing mathematicians and scientists
from other disciplines, there seem to be different areas of the brain responsible
for numeric and linguistic processing [2]. Since we also want users from other
disciplines to understand the systems they use, we might think about explaining
the involved math linguistically. However, it has been shown that linguistic
notions of quantities are hard to understand [60]. Hence, depending on the type
of training and the given knowledge of a user, different ways of understanding
are to be supported.
There have been many proposals as to which questions should be answered by
explanations of AI systems or, turning it the other way around, which answers
indicate a human understanding of a system [30]. Building the right mental
model of how the system works and what it does requires some years of training
and intensive study; this is the level of developers. Scientists work on answers to
questions like “When does the system fail?” proving error rates and guarantees of
robustness. Understanding why a certain system does what it does is the subject
of research, investigating an algorithm and its implementations on a particular
computing architecture. We still do not know all answers to this question for all
systems. Understanding in the sense of being able to rebuild or reconstruct it is
a matter of research.

Understanding might also be indicated by knowing the answer to “How do
I use it?” and by giving a good estimate of the kind of the system’s actions and
their result. This is the level of understanding that regular users have of their
smartphones and the applications installed on them. Without knowing how
they work, users were immediately able to use them and trusted them right
away. Offering
a helpful metaphor to users eases the usage of systems. The system developers
must take care that this surrogate model of the system’s functionality is not
misleading. Since we understand human actions through ascribing an intention,
users often apply this to systems as well; but systems do not have an intention,
their developers and producers do (cf. Sect. 5). It is the task of system developers
to design systems such that their limitations can easily be derived from the
usage-guiding metaphor or by other means.
Understanding might also be related to the data and the process that gener-
ated them. In this respect, interactive exploration of the data and the learned
model serve the users’ understanding of their data. An example is machine learn-
ing for the sciences, where, e.g., biologists and doctors analyze genomic data with
the help of a learning method that delivers a model which can be inspected by
the experts. The impact of plausibility for interpreting models is investigated
with respect to rule models [18]. More details on this type of understanding can
be found in Sect. 4.
An important meaning of understanding is to know the particular properties
of a system and their impact. The results of the research are to be transferred into
labels allowing decision-makers to match the requirements of their application
and the characteristics of a system. This moves beyond the fact sheet or model
card approaches of IBM and Google which document systems [3,45]. Theoretical
bounds of the error, the resource consumption (runtime, memory, energy), the
fairness, robustness, and the covered type of learning tasks can be expressed by
care labels for implemented algorithms on a certain computing platform, similar
to care labels of textiles and washing machines or dryers. The novel care labels
neither require a particular training nor interest in AI methods. They turn the
stock of knowledge about AI systems into guarantees for their use. The particular
set of care labels can only be worked out by a common undertaking of many
scientists because it implies testing procedures that verify the care label for a
certain implemented method. At the same time, working on it indicates where
further research is needed. This brings us from examining human understanding
back to the research in the complexity of the systems.
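One possible concrete form of such a care label is a small structured record attached to an implemented method. All fields and values below are invented for illustration and do not come from an existing standard.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class CareLabel:
    """Machine-checkable summary of a verified implementation's guarantees,
    analogous to the care labels on textiles (all fields illustrative)."""
    method: str
    platform: str
    task: str
    error_bound: float      # verified upper bound on test error
    runtime_class: str      # coarse classes 'A'-'E', like energy labels
    memory_mb: int
    energy_class: str
    fairness_checked: bool

# Invented example: what a label for one implementation might state.
label = CareLabel(
    method="decision tree (CART)",
    platform="x86-64 CPU",
    task="binary classification",
    error_bound=0.08,
    runtime_class="B",
    memory_mb=120,
    energy_class="A",
    fairness_checked=True,
)

def meets_requirements(label, max_error, required_runtime):
    """A decision-maker matches application requirements against the label.
    Single-letter classes 'A'-'E' compare lexicographically."""
    return (label.error_bound <= max_error
            and label.runtime_class <= required_runtime)

print(meets_requirements(label, max_error=0.1, required_runtime="C"))
```

The point of such a record is exactly what the text describes: the guarantees behind each field must be established once, by verification procedures, so that the label itself requires no AI expertise to read.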

4 Explainability – Opening the Black Box

Explainability is at the heart of Trustworthy AI and must be guaranteed
for developing AI systems aimed at empowering and engaging people, across
multiple scientific disciplines and industry sectors. In multiple practical decision
multiple scientific disciplines and industry sectors. In multiple practical decision
making scenarios, human-machine symbiosis is needed, with humans keeping the
responsibility for the decisions, but relying on machine aids. We can completely
rely on machines (AI systems) only when we can understand, to the best of our
ability and with regard to our purposes, the reasons for the behavior observed
or the decision suggested.
What an ‘explanation’ is had already been investigated by Aristotle in his
Physics, a treatise dating back to the 4th century BC. Today it is urgent to give
it a functional meaning, as an interface between people and the algorithms that
suggest decisions, or that decide directly.
Truly useful AI systems for decision support, especially in high-stakes domains
such as health, job screening and justice, should enhance the awareness and
the autonomy of the human decision maker, so that the ultimate decision is
more informed, free of bias as much as possible, and ultimately ‘better’ than
the decision that the human decision maker would have made without the AI
system, as well as ‘better’ than the automated decision by the AI system alone.
Decision making is essentially a socio-technical system, where a decision
maker interacts with various sources of information and decision support tools,
whose quality should be assessed in terms of the final, aggregated outcome (the
quality of the decision) rather than assessing only the quality of the decision
support tool in isolation (e.g., in terms of its predictive accuracy and precision
as a stand-alone tool). To this purpose, rather than purely predictive tools, we
need tools that explain their predictions in meaningful terms, a property that is
rarely matched by the AI tools available in the market today.
Following the same line of reasoning, AI predictive tools that do not satisfy
the explanation requirement should simply not be adopted, consistently with
the GDPR’s provisions concerning the ‘right of explanation’ (see Articles
13(2)(f), 14(2)(g), and 15(1)(h), which require data controllers to provide data
subjects with information about ‘the existence of automated decision-making,
including profiling and, at least in those cases, meaningful information about the
logic involved, as well as the significance and the envisaged consequences of such
processing for the data subject.’)
Different roles are played within the decision-making pipeline; therefore, it
is important to clarify to whom the explanation is interpretable and which kind
of questions each role can ask.

– End users: ‘Am I being treated fairly?’, ‘Can I contest the decision?’, ‘What
could I do differently to get a positive outcome?’
– Engineers and data scientists: ‘Is my system working as designed?’
– Regulators: ‘Is it compliant?’

Essentially, the explanation problem for a decision support system can be
understood as ‘where’ to place a boundary between what algorithmic details
the decision maker can safely ignore and what meaningful information the deci-
sion maker should absolutely know to make an informed decision. Therefore
explanation is intertwined with trustworthiness (what to safely ignore), com-
prehensibility (meaningfulness of the explanations), and accountability (humans
keeping the ultimate responsibility for the decision).

4.1 Approaches

The explanation of decision processes is fundamental not only in machine learning
but also in other AI fields. In robotics, for instance, a verbalization by a
mobile robot can provide a way for the robot to ‘tell’ its experience in a way
that is understandable by humans, or a rescue robot can explain its actions through
a decision tree providing human-friendly information. Concerning planning and
scheduling, it is beneficial for the user to have the reasons for a specific plan
explained, so that she can agree or disagree with the returned plan. The expla-
nations of the decisions of multi-agent systems can provide insights for resolving
conflicts and harmful interactions or for summarizing the strategies adopted by
the agents. On the other hand, knowledge representation and reasoning can help
in providing logical justifications to explanations or augment basic logic with
inference reasoning supplying more actionable explanations. Along the same lines,
computer vision techniques provide visualization tools for enhancing expla-
nations that can be easily understood at a glance both for images and for text.
In Machine Learning the problem is articulated in two different forms:

– Black Box eXplanation (BBX), or post-hoc explanation, which, given a
black box model, aims to reconstruct its logic;
– eXplanation by Design (XbD), which aims to develop a model that is
explainable on its own.

The most recent works in the literature are discussed in the review [23], which
organizes them according to the ontology illustrated in the figure below (Fig. 1).
Today we have encouraging results that allow us to reconstruct individual
explanations: answers to questions such as ‘Why wasn’t I chosen for the place I
applied for? What should I change to overturn the decision?’

Fig. 1. Open the Black Box Problems. The first distinction concerns XbD and BBX.
The latter can be further divided between Model Explanation, when the goal of expla-
nation is the whole logic of the dark model, Outcome Explanation, when the goal is to
explain decisions about a particular case, and Model Inspection, when the goal is to
understand general properties of the dark model.

Particularly active is the stream on ‘Outcome Explanation’, which focuses on
the local behavior of a black box [23], searching for an explanation of the decision
made for a specific instance. Some of these approaches are model-dependent and
aim, e.g., at explaining the decisions of neural networks by means of saliency
maps, i.e., the portions of the input record (such as the regions of an input
image) that are mainly responsible for the classification outcome [61]. A few
more recent methods are model-agnostic, such as LIME [51]. The main idea is
to derive a local explanation for a decision outcome on a specific instance by
learning an interpretable model from a randomly generated neighborhood of the
instance under investigation, where each instance in the neighborhood is labeled
by querying the black box. An extension of LIME using decision rules (called
Anchors) is presented in [52], which uses a bandit algorithm that randomly
constructs the rules with the highest coverage and precision. Another recent
approach, LORE [24], provides local explanations in terms of both factual rules
(why has the instance been classified as such?) and counterfactual rules (what
should change in the instance to obtain a different classification?).
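The model-agnostic idea behind LIME can be sketched as follows: perturb the instance, label the neighbourhood by querying the black box, and fit a simple interpretable model locally. The black box below is an invented stand-in, and the sketch omits LIME's proximity weighting and feature selection.

```python
import random

random.seed(42)

def black_box(x):
    # Invented opaque classifier standing in for a deep model.
    return 1 if 2.0 * x[0] + x[1] ** 2 > 1.0 else 0

def local_surrogate(instance, n_samples=500, scale=0.3):
    """Fit per-feature linear slopes on perturbed neighbours of `instance`,
    labelled by querying the black box (a crude LIME-style sketch)."""
    neighbourhood = [
        [v + random.gauss(0.0, scale) for v in instance]
        for _ in range(n_samples)
    ]
    labels = [black_box(z) for z in neighbourhood]
    weights = []
    for j in range(len(instance)):
        xs = [z[j] for z in neighbourhood]
        mean_x = sum(xs) / n_samples
        mean_y = sum(labels) / n_samples
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, labels))
        var = sum((x - mean_x) ** 2 for x in xs)
        weights.append(cov / var)  # how strongly feature j pushes towards class 1
    return weights

weights = local_surrogate([0.4, 0.2])
print([round(w, 2) for w in weights])
```

Near the point [0.4, 0.2] the first feature dominates the black box's local behaviour, so its surrogate weight should come out clearly larger. An explanation of this form addresses the question 'what should I change to overturn the decision?' locally, without opening the model itself.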

4.2 Open Challenges


To sum up, despite the soaring attention to the topic, the state of the art to date
still exhibits ad-hoc, scattered results, mostly hard-wired with specific models.
A widely applicable, systematic approach with a real impact has not emerged
yet and many interesting and intertwined questions are still to be answered:

– Formalisms for explanations and quantification of comprehensibility: What
are the key features for explanatory AI? Is there a general structure for
explanatory AI? How does an AI system reach a specific decision, and based
on what rationale or reasons does it do so? A formalism for explanations is
missing, and still no standards exist to quantify the degree of comprehensibility
of an explanation for humans; this requires interdisciplinary research involving
cognitive science, psychology, etc. The challenge is hard, as explanations
should be sound and complete in statistical and causal terms, and yet
comprehensible to the users subject to decisions, the developers of the AI
system, researchers, data scientists and policymakers, authorities and auditors,
etc. This will require the design of social experiments to validate the usefulness
of explanations for different stakeholders and the combined effect of human
and AI decision systems.
– Generating multimodal explanations: explanations should come as meaningful
narratives, expressed clearly and concisely, or through visualizations,
summarizations, or exemplar/counter-exemplar cases, up to explanation
systems capable of supporting human-machine conversation, with two-way
feedback and reinforcement learning. Explanations should reveal the why,
why-not, and what-if. This would require explanations linking the visual
structure of image or video scenes, or of the contained objects, with knowledge
of the real world expressed with definitions or facts using natural language
or logic descriptions.
– Open the black box (BBX): at the state of the art, the best learning methods
for text and images are based on deep neural networks; therefore, post-hoc
explanators need to be coupled with the black box, capable of achieving the
quality standards required above.
– Transparency by design of hybrid AI algorithms (XbD): the challenge is
twofold: i) to link learnt data models with a priori knowledge that is explicitly
represented through a knowledge graph or an ontology, which would allow
relating the features extracted by deep learning inference to definitions of
objects in a knowledge space. Different kinds of hybrid systems should be
investigated, from loose coupling to tight integration of symbolic and numerical
models; ii) to re-think machine learning as a joint optimization problem
of both accuracy and explainability.

5 Verification
Verification is typically the process of

   providing evidence that something that was believed (some fact or hypothesis
   or theory) is correct.

This can take many forms within computational systems, with a particularly
important variety being formal verification, which can be characterised as

   the process of proving or disproving the correctness of a system with respect
   to a certain formal specification or property.

Using formal verification allows us to establish key properties of hardware or
software using formal logic, rather than either testing or informal arguments.
This may appear to be an unnecessarily strong step but, while testing is both
widespread and (generally) easy to mechanise, it is important to remember that
testing typically involves selecting a (small) subset of scenarios and assessing
whether the system works within those. In the case of complex, autonomous
systems, it is often impossible to measure how many of the possible scenarios
testing has covered.
Meanwhile, the focus of formal verification is to prove that the system will
work as expected in all scenarios. Yet this comes at a cost, with formal verifi-
cation being expensive (in that it can take significant modelling/design effort),
complex (in that formal verification techniques are often computationally expen-
sive), and restricted (in that real-world, complex scenarios will require some
abstraction/analysis before verification).
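The contrast between testing (a small subset of scenarios) and formal verification (all scenarios) can be illustrated on a toy, finite model; real systems require symbolic techniques, since their state spaces are vastly larger or infinite.

```python
from itertools import product

# Toy controller: a tank heater that must never be on when the tank is empty.
def heater_on(level, temperature):
    return level > 0 and temperature < 60

def violates_safety(level, temperature):
    # Safety property: heater on while the tank is empty is catastrophic.
    return heater_on(level, temperature) and level == 0

# Testing: a handful of hand-picked scenarios, easy to mechanise ...
for level, temperature in [(5, 20), (0, 20), (5, 80)]:
    assert not violates_safety(level, temperature)

# ... versus an exhaustive, verification-style check of EVERY state the
# (discretised) model allows: 11 levels x 101 temperatures.
counterexamples = [
    state for state in product(range(0, 11), range(0, 101))
    if violates_safety(*state)
]
print(counterexamples)  # an empty list proves the property for this model
```

Exhaustive enumeration is feasible here only because the model has 1,111 states; the cost, complexity, and abstraction mentioned above arise precisely when the same guarantee is sought for realistic state spaces.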
Nevertheless, formal verification is important for safety-critical systems, espe-
cially the key parts of systems where safety aspects are handled [33].

5.1 Issues
As we turn to AI systems, particularly autonomous systems that have key
responsibilities, we must be sure that we can trust them to act independently.

The concept of ‘trust’ in autonomous systems is quite complex and subjective
[14]. However, the trustworthiness of an autonomous system usually comprises
two key aspects:
1. reliability—will it always work reliably?
2. beneficiality—will it always do what we would like it to?
The first requirement is common among all cyber-physical systems; the second
is especially relevant to autonomous systems. Since autonomous systems must
make their own decisions and take their own actions, then unless we can prescribe
exactly what the system will do in every situation, we must trust it to make
the decisions we would like it to. Clearly, in any non-trivial situation, we cannot
enumerate all possible situations/decisions so we are left trusting that it will
behave as we would want even when not directly under our control.
We here need strong verification techniques for ensuring this second aspect.
If we do not know when, how, and (crucially) why autonomous systems make
their decisions then we will not trust them.

5.2 Approaches
In verifying reliability, there are a wide range of techniques, many of which will
provide probabilistic estimates of the reliability of the software [39]. In verifying
beneficiality, there are far fewer methods. Indeed, what verification method we
can use depends on how decisions are made. Beyond the broad definition of
autonomous systems as “systems that make their own decisions without human
intervention” there are a variety of options.
– Automatic: whereby a sequence of prescribed activities is fixed in advance.
Here, the decisions are made by the original programmer and so we can carry
out formal verification on the (fixed) code. (Note, however, that these systems
show little flexibility.)
– Learning (trained system): whereby a machine learning system is trained
offline from a set of examples.
Here, the decisions are essentially taken by whoever chose the training set.
Formal verification is very difficult (and often impossible) since, even when
we know the training set, we do not know what attributes of the training
set are important (and what bias was in the training set). Hence the most
common verification approach here is testing.
– Learning (adaptive system): whereby the system’s behaviour evolves through
environmental interactions/feedback.
In systems such as this (reinforcement learning, adaptive systems, etc.), the
decisions are effectively taken by the environment. Since we can never fully
describe any real environment, we are left with either testing or approximation
as verification approaches.
– Fully Autonomous: whereby decisions involve an algorithm based on internal
principles/motivations and (beliefs about) the current situation.
Decisions are made by software, not fixed in advance and not directly driven by the system’s environment or training. Here, rather than verifying all the
decisions the system might make (which we do not know), we can verify the
way that the system makes decisions [10]. At any particular moment, will it
always make the best decision given what it knows about the situation?
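For this last category, verifying the decision procedure rather than the decisions themselves can be illustrated with a toy sketch (the actions, utilities, and belief model are all hypothetical): over a discretised belief space, we check that the chosen action is never strictly dominated in expected utility.

```python
# Hypothetical setting: an agent chooses among three actions given a belief
# p_blocked that the corridor ahead is blocked. We verify the *decision
# procedure* itself, not an enumeration of all its decisions.

ACTIONS = ["advance", "detour", "wait"]

# Utility of each action if blocked / if clear (illustrative numbers).
UTILITY = {
    "advance": {"blocked": -10.0, "clear": 5.0},
    "detour":  {"blocked": 2.0,   "clear": 2.0},
    "wait":    {"blocked": 0.0,   "clear": 0.0},
}

def expected_utility(action, p_blocked):
    u = UTILITY[action]
    return p_blocked * u["blocked"] + (1.0 - p_blocked) * u["clear"]

def decide(p_blocked):
    """The decision procedure under verification: pick the action with the
    highest expected utility given the current belief."""
    return max(ACTIONS, key=lambda a: expected_utility(a, p_blocked))

def verify_decision_procedure(steps=1000):
    """Check every belief in a discretised belief space; return the first
    counterexample belief, or None if the property holds everywhere."""
    for i in range(steps + 1):
        p = i / steps
        best = max(expected_utility(a, p) for a in ACTIONS)
        if expected_utility(decide(p), p) < best:
            return p
    return None

counterexample = verify_decision_procedure()
print("property holds" if counterexample is None else f"fails at {counterexample}")
```

The point of the design is that the property quantifies over the way decisions are made ("always the best given what it knows"), so it survives even when the concrete situations the system will face cannot be enumerated.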

5.3 Challenges
What is our real worry about autonomous systems? It is not particularly that
we think they are unreliable [55] but that we are concerned about their intent.
What are they trying to do and why are they doing this? It is here that ‘why’
becomes crucial. In complex environments we cannot predict all the decisions
that must be made (and so cannot pre-code all the ‘correct’ decisions) but we can
ensure that, in making its decisions, an autonomous system will carry them out
“in the right way”. Unless we can strongly verify that autonomous systems will certainly try to make the right decisions, and make them for the right reasons, it is irresponsible to deploy such systems in critical environments.
In summary, if we build our system well (exposing reasons for decisions) and pro-
vide strong verification, then we can make significant steps towards trustworthy
autonomy. If we can expose why a system makes its decisions then:

1. we can verify (prove) that it always makes the appropriate decisions [10];
2. we can help convince the public that the system has “good intentions” [36];
3. we can help convince regulators to allow/certify these systems [15]; and so
4. give engineers the confidence to build more autonomous systems.

6 Human Rights and AI


It is now widely accepted that unless AI systems adhere to ethical standards that reflect values of fundamental importance to human communities, those systems will not qualify as trustworthy. Although discussions of ‘AI ethics’ have become
commonplace, there is no agreed set of ethical standards that should govern the
operation of AI, reflected in the variety of ethical standards espoused in various
voluntary ‘AI ethics codes’ that have emerged in recent years. Some values com-
monly appear in these discussions, particularly those of ‘transparency’, ‘fairness’
and ‘explainability’ [17,22] yet the vagueness and elasticity of the scope and con-
tent of ‘AI ethics’ means that it largely operates as an empty vessel into which
anyone (including the tech industry, and the so-called Digital Titans) can pour
their preferred ‘ethical’ content. Without an agreed framework of norms that
clearly identifies and articulates the relevant ethical standards which AI systems
should be expected to comply with, little real progress will be made towards
ensuring that these systems are in practice designed, developed and deployed in
ways that will meet widely accepted ethical standards.10

10 See also chapters 9 and 10 of this book.

6.1 Why Should Human Rights Provide the Foundational Ethical Standards for Trustworthy AI?

Elsewhere I have argued that international human rights standards offer the most
promising set of ethical standards for AI, as several civil society organisations
have suggested, for the following reasons.11
First, as an international governance framework, human rights law is intended
to establish global standards (‘norms’) and mechanisms of accountability that
specify the way in which individuals are entitled to be treated, of which the UN Universal Declaration of Human Rights (UDHR) 1948 is the most well-known.
Despite considerable variation between regional and national human rights char-
ters, they are all grounded on a shared commitment to uphold the inherent
human dignity of each and every person, in which each individual is regarded as of equal worth, wherever situated [41]. These shared foundations reflect the status of human rights standards as basic moral entitlements of every individual in
virtue of their humanity, whether or not those entitlements are backed by legal
protection [12].
Secondly, a commitment to effective human rights protection is a critical
and indispensable requirement of democratic constitutional orders. Given that
AI systems increasingly configure our collective and individual environments,
entitlements and access to, or exclusion from, opportunities and resources, it is
essential that the protection of human rights, alongside respect for the rule of
law and the protection of democracy, is assured to maintain the character of
political communities as constitutional democracies, in which every individual
is free to pursue his or her own version of the good life as far as this is possible within a framework of peaceful and stable cooperation underpinned by the rule of law [28].
Thirdly, the well-developed institutional framework through which system-
atic attempts are made to monitor, promote and protect adherence to human
rights norms around the world offers a well-established analytical framework
through which tension and conflict between rights, and between rights and col-
lective interests of considerable importance in democratic societies, are resolved

11 See various reports by civil society organisations concerned with securing the protection of international human rights norms, e.g., [41,42]. See also the Toronto Declaration: Protecting the Rights to Equality and Non-Discrimination in Machine Learning Systems (2018) (available at https://www.accessnow.org/the-toronto-declaration-protecting-the-rights-to-equality-and-non-discrimination-in-machine-learning-systems/); the Montreal Declaration for a Responsible Development of Artificial Intelligence: A Participatory Process (2017) (available at https://nouvelles.umontreal.ca/en/article/2017/11/03/montreal-declaration-for-a-responsible-development-of-artificial-intelligence/); Access Now (see https://www.accessnow.org/tag/artificial-intelligence/ for various reports); Data & Society (see https://datasociety.net/); IEEE’s report on ethically aligned design for AI (available at https://ethicsinaction.ieee.org/), which lists as its first principle that AI design should not infringe international human rights; and the AI Now Report (2018) (available at https://ainowinstitute.org/AI_Now_2018_Report.pdf).
Trustworthy AI 29

in specific cases through the application of a structured form of reasoned evaluation. This approach is exemplified in the structure and articulation of human
rights norms within the European Convention on Human Rights (the ‘ECHR’),
which specifies a series of human rights norms, including (among others) the right to freedom of expression, the right to life, the right to private and home life, and the right to freedom of assembly and religion, all of which must be guaranteed to all individuals and effectively protected. For many of
those rights, certain qualifications are permitted allowing human rights interfer-
ences only to the extent that they are justified in pursuit of a narrow range of
clearly specified purposes that are prescribed by law, necessary in a democratic
society and proportionate in relation to those purposes. This structured frame-
work for the reasoned resolution of conflict arising between competing rights and
collective interests in specific cases is widely understood by human rights lawyers
and practitioners, forming an essential part of a ‘human rights approach’ and
overcomes another shortcoming in existing codes of ethical conduct: their fail-
ure to acknowledge potential conflict between ethical norms, and the lack of any
guidance concerning how those conflicts will or ought to be resolved in the design
and operation of AI systems.
Fourthly, the well-established human rights approach to the resolution of eth-
ical conflict is informed by, and developed through, a substantial body of author-
itative rulings handed down by judicial institutions (at both international and
national level) responsible for adjudicating human rights complaints. These adju-
dicatory bodies, which determine allegations of human rights violations lodged
by individual complainants, form part of a larger institutional framework that
has developed over time to monitor, promote and protect human rights, and
includes a diverse network of actors in the UN system, other regional human
rights organisations (such as the Council of Europe and a wide range of civil
society organisations focused on the protection of human rights), national courts
and administrative agencies, academics and other human rights advocates. The
institutional framework for rights monitoring, oversight and adjudication pro-
vides a further reason why human rights norms provide the most promising
basis for AI ethics standards. This dynamic and evolving corpus of judicial deci-
sions can help elucidate the scope of justified interferences with particular rights
in concrete cases, offering concrete guidance to those involved in the design,
development and implementation of AI systems concerning what human rights
compliance requires. Most importantly, human rights norms are both interna-
tionally recognised and, in many jurisdictions, supported by law, thereby pro-
viding a set of national and international institutions through which allegations
of human rights violations can be investigated and enforced, and hence offer a
means for real and effective protection.

6.2 How to Ensure that Trustworthy AI Offers Effective Human Rights Protection?
The need to develop and establish a human-rights centred approach to the governance of AI systems springs from recognition that self-regulatory approaches, which rely on voluntary compliance by firms and organisations to ensure that AI systems comply with ethical standards, will not provide adequate and effective protection. In a highly competitive market, driven by the forces of global
capitalism, commercial firms cannot be relied upon to, in effect, satisfactorily
mark their own ethics homework. Instead, legally mandated external oversight
by an independent regulator with appropriate investigatory and enforcement
powers, which includes opportunities for meaningful stakeholder and public con-
sultation and deliberation, is needed to ensure that human rights protection
is both meaningful and effective. Yet achieving this is no small task. Designing
and implementing a human-rights centred governance framework to secure trust-
worthy AI requires much more foundational work, both to specify the content
and contours of this approach more fully and to render it capable of practical
implementation.
Nevertheless, I believe that the core elements of such an approach can be identified to ensure the design, development and deployment of human rights-compliant AI systems in real-world settings. The core elements of an approach that I have developed with collaborators, which we call ‘human rights-centred design, deliberation and oversight’, have the potential to ensure that, in
practice, AI systems will be designed, developed and deployed in ways that pro-
vide genuinely ethical AI, with contemporary human rights norms as its core
ethical standards. This governance regime is designed around four principles,
namely (a) design and deliberation (b) assessment, testing and evaluation (c)
independent oversight, investigation and sanction, and (d) traceability, evidence
and proof. Our proposed approach (which we have outlined more fully elsewhere
[62]) draws upon a variety of methods and techniques varying widely in their disciplinary foundations, seeking to integrate ethical design strategies, technical tools and techniques for software and system design, verification, testing and auditing, together with social and organisational approaches to effective
and legitimate governance. Suitably adapted and refined to secure conformity
with human rights norms, these various methodological tools and techniques
could be drawn together in an integrated manner to form the foundations of
a comprehensive design and governance regime. It requires that human rights
norms are systematically considered at every stage of system design, devel-
opment and implementation (making interventions where this is identified as
necessary), drawing upon and adapting technical methods and techniques for
safe software and system design, verification, testing and auditing in order to
ensure compliance with human rights norms, together with social and organi-
sational approaches to effective and legitimate regulatory governance (including
meta-regulatory risk management and impact assessment methodologies and
post-implementation vigilance). Such a regime must be mandated by law, and
relies critically on external oversight by independent, competent and properly
resourced regulatory authorities with appropriate powers of investigation and
enforcement, requiring input from both technical and human rights experts, on
the one hand, and meaningful input and deliberation from affected stakeholders
and the general public on the other.

6.3 Open Challenges

Much more theoretical and applied research is required to flesh out the details
of our proposed approach, generating multiple lines of inquiry that must be pur-
sued to develop the technical and organisational methods and systems that will
be needed, based on the adaptation of existing engineering and regulatory tech-
niques aimed at ensuring safe system design, re-configuring and extending these
approaches to secure compliance with a much wider and more complex set of
human rights norms. It will require identifying and reconfiguring many aspects of
software engineering (SE) practice to support meaningful human rights evalua-
tion and compliance, complemented by a focused human rights-centred interdis-
ciplinary research and design agenda. To fulfil this vision of human-rights centred
design, deliberation and oversight necessary to secure trustworthy AI, several
serious challenges must first be overcome, at the disciplinary level, the organisational level, the industry level, and the policy-making level, none of which will be easily achieved. Furthermore, because human rights are often highly
abstract in nature and lacking sharply delineated boundaries given their capac-
ity to adapt and evolve in response to their dynamic socio-technical context,
there may well be only so much that software and system design and implemen-
tation techniques can achieve in attempting to transpose human rights norms
and commitments into the structure and operation of AI systems in real world
settings. Nor can a human-rights centred approach ensure the protection of all
ethical values adversely implicated by AI, given that human rights norms do not
comprehensively cover all values of societal concern. Rather, our proposal for
the human-rights centred governance of AI systems constitutes only one impor-
tant element in the overall socio-political landscape needed to build a future in
which AI systems are compatible with liberal democratic political communities
in which respect for human rights and the rule of law lie at its bedrock.12 In
other words, human-rights norms provide a critical starting point in our quest to
develop genuinely trustworthy AI, the importance of which is difficult to overstate. As the UN Secretary-General’s High-Level Panel on Digital Cooperation
(2019) has stated:

“There is an urgent need to examine how time-honoured human rights frameworks and conventions and the obligations that flow from those commitments can guide actions and policies relating to digital cooperation and digital technology”.

7 Beneficial AI

Artificial intelligence is currently experiencing a surge of research investment and technological progress. Tasks that seemed far off a decade ago—such as defeating human Go champions, driving cars safely in urban settings, and translating accurately among dozens of languages—are now largely solved.

12 See also chapter 9 of this book.

Although there are several remaining obstacles that require breakthroughs in basic research, it seems reasonable to expect that AI will eventually reach
and then exceed its long-standing objective of general-purpose, human-level AI.
Indeed, the great majority of active AI researchers polled on this question are
quite confident that this will happen during this century, with many putting
the likely date much earlier [20]. Moreover, Critch and Krueger [8] argue that
other characteristics of AI systems—such as speed, replication, and direct Inter-
net contact with billions of humans—mean that thresholds of concern could be
crossed long before general-purpose human-level AI is achieved.
The question of what happens when AI succeeds in its quest for true machine
intelligence is seldom considered. Alan Turing [59] was not optimistic:
“It seems probable that once the machine thinking method had started,
it would not take long to outstrip our feeble powers. . . . At some stage
therefore we should have to expect the machines to take control.”

This is the problem of control: if we create machines more powerful than ourselves, how do we retain power over them forever? Conversely, and perhaps
more positively, we would like to create provably beneficial AI: AI systems that
are guaranteed to be of benefit to humans, no matter how capable they become.
This is the essence of trustworthy AI: we trust a machine if and only if we have
good reason to believe it will act in ways beneficial to us.

7.1 AI in the Standard Model


To solve the control problem, it helps to understand why we humans might lose
control—why it is that making AI better and better could lead to the worst
possible outcome.
The difficulty has its origins in the way we have defined and pursued AI since
the beginning. As computers emerged and AI became a possibility in the 1940s
and 1950s, it was natural to define AI as a machine version of human intelligence.
And human intelligence, in turn, was increasingly associated with the formal
definitions of rationality proposed by Ramsey [50] and by von Neumann and
Morgenstern [47]. In this view, roughly speaking,
Humans are intelligent to the extent that our actions can be expected to
achieve our objectives.

(In truth, this view of intelligence, expressed non-mathematically, can be traced easily back to Aristotle and other ancient sources.) The natural translation to machines looks like this:
Machines are intelligent to the extent that their actions can be expected
to achieve their objectives.

As machines, unlike humans, do not come with objectives, those are supplied
exogenously, by us. So we create optimizing machinery, plug in the objectives,
and off it goes.

I will call this the standard model for AI. It is instantiated in slightly differ-
ent ways in different subfields of AI. For example, problem-solving and planning
algorithms (depth-first search, A∗, SATPlan, etc.) aim to find least-cost action
sequences that achieve a logically defined goal; game-playing algorithms max-
imize the probability of winning the game; MDP (Markov Decision Process)
solvers and reinforcement learning algorithms find policies that maximize the
expected discounted sum of rewards; supervised learning algorithms minimize a
loss function. The same basic model holds in control theory (minimizing cost),
operations research (maximizing reward), statistics (minimizing loss), and eco-
nomics (maximizing utility, GDP, or discounted quarterly profit streams).
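As a minimal instance of the standard model (states, rewards, and the discount factor are invented for illustration), a value-iteration sketch shows the pattern: the objective is handed to the solver as a reward function, and the solver simply maximizes the expected discounted sum, nothing more.

```python
# Toy MDP in the standard model: the objective (reward) is supplied
# exogenously and the solver optimises it. All quantities are illustrative.

GAMMA = 0.9
STATES = ["A", "B", "goal"]
# transitions[state][action] -> (next_state, reward); 'goal' is terminal.
TRANSITIONS = {
    "A": {"left": ("A", 0.0), "right": ("B", 0.0)},
    "B": {"left": ("A", 0.0), "right": ("goal", 10.0)},
    "goal": {},
}

def value_iteration(eps=1e-6):
    """Standard Bellman backups until the values stop changing."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s, acts in TRANSITIONS.items():
            if not acts:  # terminal state keeps value 0
                continue
            best = max(r + GAMMA * V[ns] for ns, r in acts.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    # Greedy policy with respect to the converged values.
    policy = {s: max(acts, key=lambda a: acts[a][1] + GAMMA * V[acts[a][0]])
              for s, acts in TRANSITIONS.items() if acts}
    return V, policy

V, policy = value_iteration()
print(V, policy)
```

Here the plugged-in reward fully determines the behaviour: the solver drives toward the goal because the reward says so, and it would pursue any other objective, correct or not, with the same diligence.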
Unfortunately, the standard model fails when we supply objectives that are
incomplete or incorrect. We have known this for a long time. For example, King
Midas specified his objective—that everything he touch turn to gold—and found
out too late that this included his food, drink, and family members. Many cul-
tures have some variant of the genie who grants three wishes; in these stories, the
third wish is usually to undo the first two wishes. In economics, this is the prob-
lem of externalities, where (for example) a corporation pursuing profit renders
the Earth uninhabitable as a side effect.
Until recently, AI systems operated largely in the laboratory and in toy,
simulated environments. Errors in defining objectives were plentiful [38], some
of them highly amusing, but in all cases researchers could simply reset the system
and try again. Now, however, AI systems operate in the real world, interacting
directly with billions of people. For example, content selection algorithms in
social media determine what a significant fraction of all human beings read
and watch for many hours per day. Initial designs for these algorithms specified
an objective to maximize some measure of click-through or engagement. Fairly
soon, the social media companies realized the corrosive effects of maximizing
such objectives, but fixing the problem has turned out to be very difficult.
Content selection algorithms in social media are very simple learning algo-
rithms that typically represent content as feature vectors and humans as
sequences of clicks and non-clicks. Clearly, more sophisticated and capable algo-
rithms could wreak far more havoc. This is an instance of a general principle [48]:
with misspecified objectives, the better the AI, the worse the outcome. An AI
system pursuing an incorrect objective is by definition in conflict with humanity.
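This principle can be caricatured in a few lines (the engagement and harm functions are invented for illustration): two systems maximize the same proxy objective, and the stronger optimizer, searching a larger space, does strictly worse on the true objective the proxy omits.

```python
# Proxy vs true objective under misspecification (illustrative numbers).

def engagement(x):
    """The proxy the system is told to maximise (e.g. click-through)."""
    return 4.0 * x

def true_welfare(x):
    """The true objective: engagement minus a harm term the proxy omits."""
    return 4.0 * x - 0.5 * x * x

def optimise(objective, candidates):
    """A crude optimiser: pick the candidate maximising the objective."""
    return max(candidates, key=objective)

weak = optimise(engagement, range(0, 6))      # weak AI: small search space
strong = optimise(engagement, range(0, 101))  # better AI: larger search space

print(true_welfare(weak), true_welfare(strong))
```

The better optimizer achieves higher engagement yet sharply lower true welfare: with a misspecified objective, improving the optimizer makes the outcome worse.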

7.2 AI in the New Model: Assistance Games

The mistake in the standard model is the assumption that we humans can supply
a complete and correct definition of our true preferences to the machine. From
the machine’s point of view, this amounts to the assumption that the objective
it is pursuing is exactly the right one. We can avoid this problem by defining the
goals of AI in a slightly different way [53]:

Machines are beneficial to the extent that their actions can be expected to
achieve our objectives.
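A numerical sketch shows why uncertainty about objectives helps in this new model (the Gaussian belief and payoffs are illustrative, not from the chapter, and the setup is in the spirit of off-switch-style assistance games): a machine unsure of the human utility U of its proposed action gains by letting a human, who knows U, veto it, since E[max(U, 0)] ≥ E[U].

```python
import random

# The robot holds a belief over the unknown human utility U of its
# proposed action, here a standard normal (purely illustrative).
random.seed(1)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Acting unilaterally yields E[U]; deferring lets the human veto
# negative-U actions, yielding E[max(U, 0)].
act_directly = sum(samples) / len(samples)
defer_to_human = sum(max(u, 0.0) for u in samples) / len(samples)

print(act_directly, defer_to_human)
```

Under this belief, deferring is worth about 0.4 utility per decision versus roughly 0 for acting unilaterally, so a machine that is uncertain about our objectives has a positive incentive to remain correctable, which is the core of the assistance-game reformulation.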
Other documents randomly have
different content
The Project Gutenberg eBook of Vengeance
From the Past
This ebook is for the use of anyone anywhere in the United States
and most other parts of the world at no cost and with almost no
restrictions whatsoever. You may copy it, give it away or re-use it
under the terms of the Project Gutenberg License included with this
ebook or online at www.gutenberg.org. If you are not located in the
United States, you will have to check the laws of the country where
you are located before using this eBook.

Title: Vengeance From the Past

Author: Robert W. Krepps

Illustrator: W. E. Terry

Release date: October 1, 2021 [eBook #66438]

Language: English

Credits: Greg Weeks, Mary Meehan and the Online Distributed


Proofreading Team at http://www.pgdp.net

*** START OF THE PROJECT GUTENBERG EBOOK VENGEANCE


FROM THE PAST ***
Ray Rollins fought to preserve the Space
Station—and Earth—from an enemy mankind had
forgotten. An enemy in hiding, awaiting its—

Vengeance From The Past!


By Geoff St. Reynard

[Transcriber's Note: This etext was produced from


Imagination Stories of Science and Fantasy
September 1954
Extensive research did not uncover any evidence that
the U.S. copyright on this publication was renewed.]
It started during the program. The little noises were there but I
didn't pay any attention to them, and I don't know now whether I
thought they were the wind and the rain or maybe some realistic
sound effects on tv. Of course they were the small sounds made by
the two things that wanted to get into my house. They tried the
doors, turning the knobs and pressing their bodies against the
panels, and then they prowled around testing the windows. They
were as silent as cobras but windows pushed or doors shoved will
make some noise and so the little creaks were there but I paid no
attention to them.
Then I got the feeling that someone was looking at me.
Nuts. My background as a fiction writer was getting under my skin.
Someone watching me, my God, from where? The French windows
behind me? Who'd be out in this downpour? I was glad my wife
Nessa was asleep upstairs. With a baby on the way she needed her
rest. Just to ease my rippling spine, I'd give a quick glance over my
shoulder.
I did.
I saw a face like a gigantic mask. Enormous skull, low brow, small
chin and thick-lipped mouth; wide cheeks and a mass of tumbled
gray hair crowning the hatless head. Suggestion of a body like a
gorilla's clad in dark broadcloth. Hands pressed flat on the glass,
short thumbs and long fingers thick as country sausages. Worst of
all the ghastly thing, two thinned eyes that caught the light of the tv
lamp and shot it back at me as glowing crimson oblongs of animal
hate. This creature, standing rock-steady beyond the full-length
windows that were streamed and blurry with the driving rain, this
beast, this—
I closed my eyes tight and then opened them. It was gone into the
rain, an optical illusion! It had really spooked me there for an
instant, the old marrow was still cold from the first grisly shock.
I turned and started watching the set again. I started to chuckle to
myself. I heard the French windows snap and groan a little with the
wind. Then I heard the fretful sound of a strained and snapping bolt.
That wasn't the wind! I jumped to my feet and whirled around. I
froze where I stood. A hulking brute with a mask for a face was
coming for me and then I saw the face was a face and not a mask at
all.
Another man behind the horror said sharply, "Don't touch him, Old
One!" and those paws with the sausage fingers fell reluctantly. I
backed up two steps and the tv set held me from going any further.
The second intruder passed the horror and thrust out his hand,
which was about as big as a hand can be without becoming an
outsize foot; it took me a moment to realize that he meant me to
shake it. When I didn't move, he grinned and said in his deep voice,
"Don't know me, Ray?" and then I did know him. I was happier not
remembering him, I wished I could stop knowing who he was, but
now I did and I knew I was likely going to be dead before sunup,
because he was Bill Cuff.
I did shake hands with him. I'm five-feet-ten and weigh one-sixty
and I'm about as rugged as the average guy, or more so, because I
play handball and used to be a pro footballer before I got married;
but if I'd angered Bill Cuff he might have picked me up and torn me
into little scraps like a piece of bond paper. He was the strongest
man I ever knew. And for a couple of years he'd been badly wanted
by the police, because he had murdered at least a dozen people. I
shook hands with him. I didn't like it but I wasn't going to pander to
my preferences just then.
"Sit down, Ray," Bill said, as if it had been his house. "Sit down, Old
One." This to his companion.
The thing with the face sat on the floor, folding down without effort
till his hams rested on his heels. I sat on the couch. Bill Cuff walked
up and down the room. He kept his voice pitched low as he talked
and I knew that Nessa wouldn't hear a thing if she happened to be
awake. I watched Cuff. He moved back and forth like a great
panther brooding in its cage and planning an escape. There was
something so easy in those movements of leg and body that the
effect wasn't altogether human. Which wasn't surprising, in view of
what he proceeded to tell me....

CHAPTER II

"You remember, Ray, the week I disappeared? You remember how I


killed the two museum guards and the three cops, and afterwards
the eight or ten searchers who were pursuing me through the
swamp? It made headlines all over this country and the rest of the
world too. Jack the Ripper had a grandson. Bill Cuff the mad
berserker was unleashed on the world, breaking men's backs and
twisting their heads in a nightmare of murder. Where would he strike
next?
"And then I didn't strike, and they said I must be dead, drowned in
the swamplands.
"I wasn't dead: obviously. I'd been discovered by a muster of the
Old Companions, and was living in their HQ, an ancient wooden
house in the center of the swamps. I was learning the history of my
race, and the plans that it had for its future.
"My race, yes....
"Ray we are the Neanderthals...."
I didn't laugh at him, hearing Bill Cuff say that so soberly. I couldn't.
Not with the thing sitting on the floor watching me; the thing that
had stepped right out of a museum reconstruction of the Stone Age!
Cuff went on talking.
"My memories came at me in a flood, remembrances of the dawn of
time. I fled in retrospect from the encroachments of Man, he who
was a little like me but so vastly different; Man who gradually,
painstakingly wiped out my breed. Or so he thought. He forgot the
matings, the myriad couplings of Neanderthal bucks with human
women. He forgot that dark blood runs stronger than light, that the
bestial is stronger than the civilized, that a drop of wolf-blood will
often make a dog a ravening brute, that one small dilution of
Neanderthal carries down through years and centuries to crop up
again, full-fledged and vigorous, time after time in an otherwise
placid strain.
"The Neanderthal died, but his seed was carried in the bodies of
Homo sapiens, and after a period cropped out in violent flowering as
the Pict. Luck brought out the great strain in force, and banding
together in the isles, we were a race apart once more. Then time
conquered us a second season; the Picts were vanquished and their
pitiful remnants bred once more into the watery outlander life-form,
that of Man.
"Then in later ages we discovered ourselves as different, but never
could make of ourselves a dominant race: so we were hunted in
ones and twos, and when our ancient blood cried for vengeance on
Man, we slew him and died alone. We were the so-called
werewolves and the vampires, the ghouls, the ogres, the incubi and
succubi, the Good Folk and changelings and devils of the woods. We
who always fought Man, unknowing what we were or why we
fought, we formed the basis of every legend that told of horrible
alien things lying in wait beside every path and in every fen and bog
and desolate place.
"In the eighteenth century we were the raging madmen of Bedlam.
"Late in the nineteenth, science unwittingly came to our aid. The
Neanderthal man emerged from dry bones as a beast, a manlike
animal who had fallen to make way for Homo sapiens. And gradually
those of us who had the dawn brain, the remembrance of glories far
past, realized that we were not mad, but poor deluded men who
thought ourselves different—we were different. We were the
descendants and inheritors of the Neanderthal, he who came before
man and was in many ways better, stronger, more savagely
intelligent and possessed of much higher capabilities. We were not
men, and the time was coming when we would no longer need to
masquerade as men. We were coming into our inheritance!"
Bill Cuff halted in front of me and his face, broad, heavy-boned,
topped with thick black hair and carrying an expression of cruel and
truculent power, now lit up with malignant glee. I felt a cold chill.
"And all this I remembered in a space of two days!
"What I remembered best was the hate.
"We hated you—oh, God, how we hated! Imagine the hate you'd
feel toward a race from Mars that came and overran your planet and
stamped out your folk till only a pitiful handful were left. Man had
come and usurped our earth, hadn't he? So the blood remembered,
and hated."
Bill Cuff laughed suddenly.
"Ray, I'm not mad, as you were just thinking. I offer you that as
proof: we are to a degree telepathic. All of us. Yet men are not.
"It's true. We are the Neanderthals. We are not human. And we
have returned to take back our inheritance, which is the world!"

CHAPTER III

He allowed me to sit without speaking for the space of about ten
minutes. I needed that time. I had to go all over what he'd said,
consider each statement, try to forget that it sounded like fantasy,
try to realize that Bill Cuff and Lord knew how many others of the
so-called Old Companions believed this yarn with their whole
energies. I had to take the tale and consider it in its entirety, as a
broad concept which might be true, and then I had to grit my teeth
and look at the significance of it as if by some incredible, wild
chance it were true....
The significance was horrible, of course, but it was doubly or rather
trebly awful for me personally, because Bill Cuff was my cousin.
His father, who'd died before Bill was born, had been my mother's
brother.
And the reason I say it was trebly bad for me was that upstairs my
wife Nessa lay asleep, and stirring in her was our child.
And if Bill Cuff was right, then that child and I myself came of a race
that was only partly human; and neither of us could call ourselves by
the proud title of Man.
At the end of ten minutes, the creature called Old One roused
himself and gave a grunt. It seemed to be a two-syllable word, but
of no language I ever knew.
Bill Cuff nodded and replied, "Yes he does, Old One," showing that it
had actually conveyed meaning. I looked again at that ferocious
mask, and I think I began believing Bill Cuff's story with an
intelligent awareness of its truth, right then. Old One was a
Neanderthal. Only a blind idiot could have doubted it.
"Now here's the reason I've come here to tell you this," began Bill
Cuff, and I waved a hand to stop him.
"I know why," I said huskily. "We're cousins. You think the same
blood may run in my veins."
"It does without a doubt. You see, I've checked on my mother, who's
still living; and she isn't a carrier. So it was my father—your uncle.
And you may not have the memory, Ray, but you have the blood.
You're Neanderthal too."
"So you want me to come out to the swamps and join you?"
Bill Cuff flung himself onto the couch beside me, leaning near,
breathing into my face. His breath smelled like raw meat, or maybe
it was my imagination. He said, his voice a rumbling growl, "No, that
isn't why I came. I want to find Howard. And I think you know
where he is."
My belly contracted and my palms that were already damp became
clammy.
I got up and paced the room nervously. My brain was clanking and
buzzing in a kind of scrambled gear.

Howard Rollins was my brother. He was a scientist, a top-flight brain;
serious where I'm flippant, keen where I'm fuzzy, and high-IQed
where I'm sort of upper-middle-minded. He'd been working for the
government since the establishment of Oak Ridge. Right at that
moment he was on a small heavily forested scrap of land off the
Maine coast, a bit of wind-swept earth called Odo Island. I knew
what he was doing and it was as important as the atom bomb, or
maybe even more so. I knew these things because Howard trusted
me. I said to Bill Cuff, "He's on Pompey Island."
Cuff's gray eyes glinted. I noticed now that Old One's eyes were
exactly the same color. "Cachug," said Cuff, or some damn fool grunt
that sounded like it, and Old One got up and went out of the French
windows into the wind and rain, lurching like a clothed gorilla. Then
my cousin turned to me once more.
"We know what he's doing, Ray; but we couldn't find out where he
was doing it. We have Old Companions in the government, but none
who were placed in your position, who'd know where Howard was
despite the heavy curtain of secrecy. So I had to risk coming into the
city to see you." He seemed to listen then, to sounds which I
couldn't hear. He grinned. "Now," he said, "how soon can you wind
up your affairs for, say, a week?"
"Right now," I said, almost without thinking. "I have six scripts
completed—"
"Then you'll meet us in Boston tomorrow afternoon—five sharp
beside the City Hall on School Street."
"Wait a minute," I protested. "What—"
"We'll explain everything then. Don't worry, Ray. You deal fairly with
us and we'll deal more than fairly by you. If you're telling me the
truth, if you play ball, you'll be the first member of the Old
Companions accepted in spite of lack of dawn memory. A proud
thing," he said, drawing himself up to his impressive full height, "a
very proud thing, Ray." The flame of a fanatic shone in the gray
eyes, and then he had turned and was gone and I was staring at the
dead tv set and licking my lips that were dry as tomb-dust.
When I was sure they had both gone, I crossed to the French
windows and secured them with a chair, and then I went to the
phone. I had to call the police right away, of course; I was believing
the mad Neanderthal story, but I knew that the light of morning
might force me to discredit it; nevertheless, Bill Cuff the multiple
murderer had been here, and the cops would have to know. Thank
God I'd given my cousin the wrong address for Howard! I picked up
the phone and started to dial the police.
To this day I don't know why I racked the phone before I'd finished
dialing. Some hunch, I don't know what it was. I stood there in the
diffused radiance of the tv lamp, still trembling from my recent
interview with that ripper and his apeman sidekick, and for a few
minutes I didn't do anything but breathe heavily, and then I turned
and raced up the stairs.
Not until I saw the empty bed, the blanket and sheet on the floor,
the open window, not till then did I face the fact that Bill Cuff would
never have left me without taking along a hostage.
Nessa was gone!

CHAPTER IV
I caught the seven a.m. train for Boston. I hadn't slept or even lain
down all night. The sole conclusion I'd come to was that I didn't
dare ask for help in this job, not yet at any rate. I would be
jeopardizing Nessa's life.
I had thought of the police. But they'd had two years to find Bill Cuff
and failed. One hint that they were looking for him, and he with his
crazy Old Companions would stamp out my wife's life as off-
handedly as I'd squash a beetle. I'm a law-abiding citizen and I
respect the enforcers of the law; but this was a special case. I'd
done my civic duty other times, but now I was on a one-man
crusade. I had to save Nessa. If I could chop down Cuff, well and
good. But Nessa came first.
As the train shot along through countryside scattered with dying
autumn foliage, swept with intermittent rains, I thought of my
brother Howard and his work. On Odo Island he and six other top-
grade brains were creating a space station for the United States—a
man-made moon, the first jump to the stars—and equally important,
a lookout post from which we could keep tabs on all of Earth.
A lot of the heavy forest on Odo was false; it couldn't be detected
from the air, and the formation of the island prevented its being seen
from the sea, but plenty of that green was only a big canopy
shielding the small air field on which a great wheel-shaped space
station had already been put together. 237 feet across, it would in
the near future be carried off the earth, towed by the enormous
three-stage rockets which were already waiting in hiding along the
eastern coast of the States. One thousand miles up—one thousand
plus—it would then become a satellite of Terra.
Odo was guarded by its coast, a real rock-bound wreckers' paradise,
and by six brace of anti-aircraft guns. There were forty Marines
based there, six scientists, and eighty-odd workmen. Everyone had
been screened back to his grandparents, and evidently none of the
Old Companions had been able to worm in, since Bill Cuff hadn't
known where the artificial moon was being constructed.
Pompey Island was about twelve miles to the south of Odo. There
wasn't anything on it but trees, and the only chuckle I could muster
during that whole train ride was at the picture of Bill Cuff at the
head of a hundred Neanderthal men (all clad in mammoth skins and
carrying stone-headed clubs) landing on Pompey and roaring over it
in search of my brother and his metal moon.
I had no idea why I was to meet Cuff in Boston. For all I knew,
Nessa might be held in New York, in Alabama, or in Evanston,
Illinois. But I had to go to Boston, because I had no other lead
whatever. I couldn't form plans because I was so totally in the dark.
I just had to do what I could. And I had to be ready to think like
lightning when I did meet Cuff and find out what was happening.
Just as we drew into the station, I used an old writer's trick: I
swallowed a couple of dexedrine tablets so that for a few hours my
fatigue would lie down and I'd have a kind of false vigor of intellect
and muscles. I'd be mighty tired by morning, but for now I'd be at
peak. I got off and took a taxi to a hotel near School Street. I
bathed and shaved and checked my automatic and the extra clips in
my jacket; then I ate an early supper and walked over to City Hall.

On the nose of five o'clock a gray car drew up and one of the men in
the back seat rolled down the window and gestured me over. I got in
beside the driver and we moved away into the traffic. Nobody said
anything until we had left Boston behind and were almost into Lynn.
Then Bill Cuff said from the back seat, "You seem pretty calm, Ray,"
and laughed. "That's the blood," he said admiringly. "That's the dark
blood. A man would be fizzing and twitching and babbling his head
off."
I had determined not to think any further than the rescue of Nessa.
I wasn't going to bog down in speculations as to my humanness, or
the truth of this whole theory of Cuff's; but even so, the chills
chased over me when he said "man" like that. Wasn't I altogether
human? Would I, too, eventually experience the dawn brain's
awakening, the revulsion against humanity, the reversion to pre-
historic emotion?
I said as casually as possible, "Seems you don't trust the dark blood
any further than you could spit it, Bill."
"Not in you, not yet. I'm sorry about Nessa. She was a sensible
precaution. You wouldn't think much of my wits if I hadn't taken
her."
"Where is she?" I held my breath tensely.
"You'll see her at the end of the trip."
"And when's that?" My breathing relaxed a trifle.
"Few hours."
"He wants to know too much," said the driver. I looked over at him.
He was a thick, short, shallow-templed fellow, gray of eye and
straight of thin-lipped mouth. He had ears like a baby elephant's,
with long unkempt hair draping over them. I could smell his breath three
feet away.
"Shut up, Trutch," said Bill Cuff impatiently. "He's my cousin."
"But has he the dawn brain? Are you sure he—"
"Shut up. Just shut up," said Bill, and his voice was like that of a
maniac holding himself in with a terrible effort.
"I don't think you ought to tell him things like—" persisted Trutch,
and then Bill Cuff had leaned forward and given him a hell of a
wallop on the side of the head with his open palm. The driver jerked
forward and grunted and then he was quiet, as the car lurched and
recovered. We were doing fifty. Cuff said, "Shut up! When I tell you
that, do it!"
There were two other men in the back. One of them growled, "Easy,
Bill. We live by the primal rage, but you must control it."
I turned and put my arm across the back of the seat and looked at
the man who had spoken. He was another of the short and stocky
breed. His eyes were snapping gray gems in a face as tan as a boot.
He had more hair piled on top of his long skull than I ever saw on
anyone but a movie actor: it was bright yellow, not gold but sulphur
yellow, and slicked with oil. His features were broad and at the same
time vulpine, the thickened muzzle of a fox. I had meant only to
glance at each of them in turn, but my gaze was held by this Old
Companion. His expression was good-humored and yet he radiated
evil, an old, old wickedness commingled with piercing intelligence.
When at last I managed to tear my eyes from him, I knew that this
was the worst of my enemies. I could not have defended that by
logic, but neither could I have been argued out of it. I would have
faced five giant Bill Cuffs rather than this yellow-haired creature.
"My name is Skagarach," he said to me, bringing my eyes back to
him involuntarily. "I am third leader in our muster of the Old
Companions. You have met the second leader, Old One. That is the
truth of our folk. In time, in generations, we shall all look so, and the
effete refinements of Homo sapiens will be gone." He glanced at Bill
Cuff, who towered beside him, watching me. "Bill is first leader. In
two years he has become so. He killed nineteen of us to gain that
leadership." Skagarach smiled, cunningly and drily. I gathered that
he was not fond of my cousin. And that was my first piece of real
hope.
"The man at the wheel," he went on, "is called Trutch. As far as I
know he has no other name. The fourth is Vance." This last was a
young fellow, about as wide as he was high, with the usual gray
eyes.
"Are the eyes a distinguishing characteristic?" I asked.
"Some ninety per cent of us have them. You do yourself. But every
gray-eyed man is not Homo-Neanderthal by any means."
"How do you—we—tell each other apart from men?"
"Actions: Cuff killed insanely, from a human viewpoint, that is, and
then answered our telepathic call. Occasionally we have only actions,
not mental communication, to judge by, and then we find the one
who has gone berserk and test him. Sometimes the dawn brain
returns to an Old Companion without the gift of telepathy."
"Suppose I were to say that I remembered being a caveman. How
would you test that?"
Skagarach and Bill Cuff grinned. The other two seemed without
humor. "Go ahead, tell us what you remember," said my cousin.
"I don't—but suppose I say, I remember hunting a mammoth...."
"You would be lying. You'd recall other things—mating with human
women, being stalked to your death, fighting the upstart Man. You
would have flashes of other centuries, of being named werewolf,
vampire, hobgoblin, ogre, bugbear and demon. Always the violence,
the antagonism to man, the slaying and being slain. Not the
common everyday life, but the high and savage points."
"I see. You give me a swell opportunity to lie to you," I told him
candidly. I had nothing to lose, for I wouldn't bother lying. I had a
hunch it wouldn't do me any good in this swift job I had to do.
"There are other checks on you," said Skagarach. He leaned forward
suddenly. "Truthfully—do you have stirrings when I say those things?
Does your brain murmur the least surprise or faintest recognition?
"Truthfully," I said, "no."
"Never mind," said he, sitting back again. "It took me 17 years to
develop the memory fully. Others are given it by a knock on the
head, or even, as Cuff here, gain it full-blown in a few days with no
stimulus from outside. You be patient, Ray. It will come."
And when it does, if it does, I thought, I hope I have the strength to
kill myself before I stop being a man and turn into one of these pre-
historic horrors!
Then I remembered that they claimed telepathic powers. I glanced
from one to another. Either my sudden thought hadn't reached
them, or they hadn't minded its implications. I said tentatively, "Can
you read the thoughts of other men?"
"Men, not other men," said Trutch viciously.
"Yes," said Skagarach.
Now I had spent a good many years around actors, and damned
good ones at that. This Skagarach was an actor from the word go,
but I believed that I was a better one. So I said carelessly, "Can you
tell what I'm thinking?" and allowed my face to assume the tiniest
lines of worry, the smallest indications of fear possible to the facial
muscles. Skagarach said immediately, "You're fretting over your
wife."
It was a good guess. He knew his book of reactions and signs inside
and out. The only trouble was that I had at that moment been
concentrating intently on a chocolate milk shake and a cheeseburger.
I had even been saying the words over in my mind. So I knew that
he had been trying to convince me of the truth of a lie, and that was
another flake of hope for me.
It was a good thing for me that I had those few minute hopes. They
were all I had.

CHAPTER V

In the late dusk of evening the car pulled off the road and rattled
over a field full of boulders and stopped at the top of a high cliff
overlooking the sea. We all got out and stretched our cramped legs.
Bill Cuff walked along the edge of the foreland until he came to a
trace of path. He called to us and we followed him down the nearly-
sheer face of the promontory, myself trying not to look at the dark
foam spattered sea so far beneath our feet.
At the base of the promontory was a beach. It had looked tiny from
above; I found that it was large, for the ocean had long ago
hollowed out a great cavelike place in the rock, and the beach ran
back under the land for several hundred feet. There were dim blue
searchlights set up at intervals, which would not have been seen
from any distance; no ship would come closer than a mile to the
coast here, and so the presence of Old Companions in the cavern
would be kept secret.
Old Companions....
Great God! What a horde swarmed in that hidden hole, across that
rock-canopied beach! There were about two hundred of them. The
majority were duplicates, in breadth of frame and depth of chest, of
Trutch and Vance. The faces were handsome or ugly, grotesque or
plain, yet all held the concentrated savagery of my four escorts.
Many had arms longer than normal. Some were so deformed that
their gait as they crossed the sand on various errands was almost
that of an ape that swings along on its knuckles. Again, several were
tall and personable, like Bill Cuff.
They were all dressed darkly, in gray broadcloth or black wool
jackets, crepe-soled shoes, no ties and no hats evident. Some of
them were carrying things—submachine guns, handguns, even hand
grenades—from broken crates to the six big boats that lined the
water's edge. Others were giving orders in voices that were almost
without exception gruff and barking. And everywhere I looked I
caught the stare of gray eyes: eyes that took the blue glow of the
searchlights and threw it back condensed and changed, so that from
many dark faces there gleamed at me thin ovals of orange and
crimson and green luminescence.
Now I knew for sure that the tale of the recrudescent apemen was
no fable. Now the focused animal hatred of this pack washed over
me like an unclean sea-wave full of crawling horrors and I realized
fully and beyond a doubt that Bill Cuff's story was true, and that
here in this cavern might well be the start of the finish of the human
race.
"Where's Nessa?" I asked Skagarach. I spoke to him rather than to
my cousin because I had a plan and this could well be the start of it.
"She's back there, I suppose," he said, gesturing to the rear of the
beach. "First come and see the boats." He led me toward the
dockless rim of the sea, and Bill Cuff came after us, glowering at
him. I'd presumed he would hate any assumption of authority on
Skagarach's part. The thing they called the primal rage bubbled near
the surface in Bill Cuff.

The boats were very like LCPs, with big bow ports closed by movable
ramps. Skagarach said, "Yes, very like LCPs," which of course was
not mind-reading, but intelligent guessing of my first thought. "We
ground them on the beach, then they can be backed off easily,
because of their specially designed propellers and rudders. The
power comes from a reactor operating with thermal neutrons, and
late refinements have made it almost wholly silent. This is the
perfect transportation for us."
"To Pompey Island, naturally," I said.
"Naturally," said Bill Cuff in a surly tone. "We're going to pay Howard
a visit."
"But what good will that do?"
"Don't be a burbling, maundering, congenital idiot, Ray," said Bill
irritably. "That space station is the answer for us. With it we'll
command the world."
"But how will you get it into the sky?"
"The same way the men were going to do it. Tow it with three stage
rockets." He relaxed his expression of potential murder, and gripped
me by the shoulder. His hand was like a bear trap. "There are
musters of the Old Companions lying in wait near every rocket
station on the seaboard. As soon as we've secured possession of the
space station, they'll know it; and within fifteen minutes the rockets
will be on the way to Pompey."
"Oh, wait a minute," I said. I was consumed with impatience to see
Nessa, but the sheer incredibility of this plot had to be coped with
now. These men were stark crazy.... "If I dared to write up a yarn in
which three-stage rockets were flown to an island and from there
into the sky with a 237-foot-broad space station, my publisher would
slit my throat with a rolled-up contract! Vampires are easier to
believe than a wacked thing like that."
"Ray," said Bill Cuff, and suddenly from the growl in his voice I
realized that I had been taking liberties with a savage cave-brute,
"Ray, do we seem like fumblers to you?"
"No," I said.
"How do you think the men were going to do it?"
"I don't know, but I presumed they'd dismantle the station, after
testing it, and tow it in parts into space, where they'd reassemble it."
"Dead wrong. They were going to carry it to the thousand-mile mark
by three-stage rockets, yes; but as a whole, not in parts."
"I didn't think it could be done."
"It can with the rockets they have. There've been improvements
since you read about rocketry last, Ray." Cuff looked superior. As if
he'd had something to do with the improvements, instead of
squatting somewhere in a swamp. "And that isn't all. Those rockets
are going to be towed themselves—from their bases to the site of
the man-made moon—by smaller vehicles built on the principles of
the VTO planes."
VTO—Vertical Take Off. Yes, it was remotely conceivable....
"But all this thud-and-blunder business," I protested, turning to
Skagarach. "You're dealing with the highest product of man. And you
figure to take it over by a series of ambushes, wild attacks in the
night, and in general the heavy hand of the apeman. It's straight out
of a nut hatch."
Then Bill Cuff hit me. I saw the swing coming, and the trunklike arm
sweeping round and up with a fist like a boulder on the end of it,
and I started to duck, and then the mountain collapsed on my skull
and the blue lights went out, wham!

CHAPTER VI

I came gradually out of a scarlet fog into a jet-black well. My head,
which was aching abominably, was pillowed on something soft and
warm and slightly moving. I heard mutters of guttural voices, the
slap of waves on metal. I licked my dry lips and tasted salt. Blood?
No, ocean salt. We were at sea. I was a little chilly. I shivered, tried
to see something, and made out the dim figure of a person above
me. The sky was moonless and inky. I was lying with my head in this
person's lap. I breathed deep and said quietly, "Nessa?"
"Yes, Ray."
I didn't have words. I reached up and touched her face with my
fingers, and she bent and we kissed. "You okay?" I said then.
"I'm okay," said Nessa. That was all. For now, that was enough.
"Anybody near us?" I looked up at her tense face.
"I am," said Skagarach. He moved into my vision, and I sat up, head
pounding, and stared at him until I could make out his foxy features.
"I'm sorry," he said under his breath. "Cuff is on the primitive side.
So are we all ... but there ought to be limits. There was no sense in
hitting you."
"I don't get it," I said. "Why is that big murder-machine the first
leader, and not you, Skagarach?"
"Ah," he said. "Ah, yes. Some of us wonder about that too." For all
his obvious intelligence, he was a sucker for a one-two compliment
to the jaw.
"That was an awful belt he gave me," I said. Something had just
occurred to me. "It kind of addled my brains. Lord, I'd like to hit him
back for that!"
"Ray?" said Nessa uncertainly. She knew me for a strictly non-
aggressive joe since I'd quit football.
"I feel—I feel furious," I said, and I hissed it low and aimed it at
Skagarach. "I never had so much yearning to pulverize someone."
Skagarach leaned over and peered into my eyes. "Don't sit on it," he
said. "Let it fume, let it rage. It may well be the primal anger. Let it
have its way. Only—I don't suggest you hit Cuff."
"Not with my fists, anyway," I agreed. "Maybe with a gun butt."
"Let the rage bubble," he said, laughing almost without sound.
"You'll do, Ray Rollins; I believe you'll do." He sat down, staring
ahead.
I found Nessa's hand and squeezed it reassuringly. She must have
been baffled by the things I'd said. Then I took up with Skagarach
where I'd left off on the beach. "All this hand-to-hand combat rot," I
said. "Where will that get you—us? Dealing with rockets and space
stations, and doing it with submachine guns, after all. It's race
suicide."
"You're thinking on the wrong tack. We are the primeval beings, yes;
and we're facing, and prepared to use, the farthest reaches of
scientific achievement. But look, Ray: if an intelligent caveman came
among a group of moderns, and saw a gun lying there, and was
taught how to use it, which would be the bright thing to do—snatch
it and use it on them, or wade in with his fists?
"We intend to blot out Homo sapiens and we shall do it. But not with
stone clubs, not with revolvers. No, we'll lay hands tonight on man's
greatest weapon, the only weapon which can be turned against the
whole globe: the space station. You object to our primitive methods.
You're not thinking deeply enough. The pure science of the station,
the rockets and the VTO tugs buffaloes you. You can't see a horde of
men with handguns and grenades capturing those awesome
devices."
"That's right, I can't."
"Why not? There is no more problem here than there is attacking a
bank vault, or an outpost of soldiers. So far as the government
knows, there is no secret army within its borders! They haven't the
faintest notion that we exist, an army of manlike non-men.
"It's the broad conception that stumps you, Ray. So picture each
operation by itself. The storming of the rocket ports—by quite
adequate troops of ours, well-armed and savage. Then the towing of
the rockets, by VTO tugs, to Pompey Island—this done by
technicians and scientists who are not men, but Neanderthals. Then
the locking of the space station to the rockets, and the takeoff for
outer space. Sixty of us in these boats, plus twenty waiting with
other musters at the rocket stations will man that moon. From attack
on Pompey to blast-off from Terra should take from one to three
hours."
"You are insane," said Nessa in a shocked voice.
"No," said Skagarach seriously, "we are sane. But we have fought for
the existence of our race through too many thousands of years, in
too many lands and too many ages, to have mercy now that our
hour is at hand."

I felt as though I'd been dropped into icy water. Skagarach wasn't
kidding. And Bill Cuff was worse than he.
And I had lied to them. I could picture in brain-shattering detail what
they would do to Nessa when they discovered that; for my lie could
blow up their whole scheme. They'd torture her, not me, for they
needed me. I looked at the thought and I couldn't stand it.
I did the most cowardly thing a man could do: I stood up and
betrayed my country, my world, and my entire breed. But I did it
because I knew exactly how much I could take before I cracked—
and while I might withstand their worst for a little while, they would
inevitably do things to Nessa which I could not take.
"Skagarach," I said, "I won't try to fool you. I don't have any dawn
memory. As far as I know I never ranged the fens or slew the
upstart Man in the ages past." I was talking like him. He was an
overwhelming personality. "But I know this: I feel a terrible, inchoate
anger against almost everything. I think it must be what you call the
primal rage. And I also feel a hell of a strong kinship with you, if not
with Bill Cuff. I lied to you. My brother and the space station aren't
on Pompey. They're on Odo Island."
"Well," he said easily, "well, I thought you might have been trying to
outwit us. I thought we might have to flay your woman an inch at a
time to make you talk. But by God, that knock on the cranium fixed
you! Congratulations—and welcome to the Old Companions." He
chuckled. "If you wonder why we trusted your first word to such an
extent, I'll say that we knew the moon was on one of these islands.
We knew that if it wasn't Pompey, it wouldn't be too damned far." He
started forward in the boat. "I'll change our course," he said.
And it was at that moment that I realized something. I had turned
traitor because I couldn't let my wife be maltreated. I had counted
on a feeble plot, a one-in-a-thousand chance that I would be able to
beat the Old Companions; and I'd known quite well that I was only
excusing myself for my craven weakness. Only now did I remember
that the real answer, the only thing a man could have honorably
done, was to kill Nessa and myself immediately—to grip her and leap
into the sea, and dive deep and deeper until we both drowned. Then
my wife would have been safe from them, and I would be dead with
a clean conscience.
But it was much too late to think of that now.
I flung myself down beside her, put my arms around her waist, and
began softly and vividly cursing myself for the prize fool and the
biggest yellow-livered skunk of all time.

CHAPTER VII

We came in toward the shores of Odo Island at ten minutes to
midnight. Bill Cuff and Skagarach and Trutch and I were sitting on
the top of the bow ramp in the lead boat, straining our eyes toward
the small forested bit of earth ahead. Starshine showed us a broken
coastline of rock that didn't look passable, not for a monkey. I said
so. Bill Cuff muttered, "We can make it."
Behind us crowded the cave beasts, each of them equipped with at
least one weapon; some had grenades slung in belts over their
shoulders, others carried .45 revolvers, tommyguns, and rifles.
Skagarach had apologized for not giving me a gun. He said that of
course they couldn't trust me that far yet. I said it was okay. I had
my own automatic and thank God they hadn't discovered it.
Bill Cuff said now, "Tell them to bring the boats in just under the
rocks, Skagarach."
Yellow-hair nodded and then after a moment had passed and he had
not moved, I said, "He isn't doing it," to Bill in a tone of inquiry.
"He's done it. He telepathed it to them."
"Why didn't you?" I asked. Cuff, looking very annoyed, stared away
from me, and Skagarach laughed maliciously. "He can't telepath as
smoothly as I, I'm afraid."
"Then why is he first leader?" I asked, chancing another swat on the
head.