Bertrand Braunschweig, Malik Ghallab (Eds.)

Reflections on Artificial Intelligence for Humanity

State-of-the-Art Survey

Lecture Notes in Artificial Intelligence (LNAI) 12600
Series Editors
Randy Goebel
University of Alberta, Edmonton, Canada
Yuzuru Tanaka
Hokkaido University, Sapporo, Japan
Wolfgang Wahlster
DFKI and Saarland University, Saarbrücken, Germany
Founding Editor
Jörg Siekmann
DFKI and Saarland University, Saarbrücken, Germany
More information about this subseries at http://www.springer.com/series/1244
Editors
Bertrand Braunschweig, Inria, Le Chesnay, France
Malik Ghallab, LAAS-CNRS, Toulouse, France
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

¹ Issues addressed by the Global Forum on AI for Humanity, Paris, Oct. 28–30, 2019.
The above issues raise many scientific challenges specific to AI, as well as interdisciplinary challenges for the sciences and humanities. They must be the topic of interdisciplinary research, social observatories and experiments, citizen deliberations, and political choices. They must be the focus of international collaborations and coordinated global actions.
The "Reflections on AI for Humanity" proposed in this book develop the above problems and sketch approaches for addressing them. They aim at supporting the work of forthcoming initiatives in the field, in particular the Global Partnership on Artificial Intelligence, a multilateral initiative launched in June 2020 by fourteen countries and the European Union. We hope that they will contribute to building a better and more responsible AI.
Trustworthy AI ........................................ 13
Raja Chatila, Virginia Dignum, Michael Fisher, Fosca Giannotti, Katharina Morik, Stuart Russell, and Karen Yeung
2 What Is AI Today?

¹ These are, for example, the challenges in image recognition [23], in question answering [35] and other natural language processing tasks [29], in automated planning [26], in theorem proving [34], and in logistics and other robotics competitions [33].
² See [2], an early survey (April 2020) of 140 references.
and more AI techniques. Similarly for mining, e.g., to support deep-drill exploration or automated open-pit mining. Space applications are among the early success stories of AI, e.g., [5]. Defense and military applications are a matter of huge investments, as well as of concerns. Precision and green agriculture relies on a range of sensing, monitoring and planning techniques, as well as on versatile robots for weeding and crop-management tasks. AI was adopted very early in e-commerce for automated pricing, user profiling and (socially dubious) optimizations. Similarly in finance, e.g., in high-frequency trading. Learning and decision-making techniques are extensively used in banking, insurance, and consulting companies. Educational institutions routinely use advanced data and text management tools (e.g., timetabling, plagiarism detection), and personal tutoring techniques are starting to be deployed.³ Automated translation software and vocal assistants with speech recognition and synthesis are commonly marketed, as are very strong board, card and video game players. Motion planning and automated character animation are successfully used by the film industry. Several natural language and document processing functions are employed by the media, law firms and many other businesses. Even graphical and musical artists experiment with AI synthesis tools in their work.
Key indicators for AI show tremendous growth over the last two decades in research, industry and deployments across many countries. For example, the overall number of peer-reviewed publications has tripled over this period. Funding has increased at an average annual growth rate of 48%, reaching over $70B worldwide. In a recent survey of 2,360 large companies, 58% reported adopting AI in at least one function or business unit [28]. Demand for AI labor vastly exceeds the supply of trained applicants, leading to growing enrollment in AI education, as well as to incentives for quickly expanding AI schooling capacities.⁴
³ E.g., [27, 31], the two winning systems of the Global Learning XPrize competition in May 2019.
⁴ These and other indicators are detailed in the recent AI Index Report [8].
in research as well as in industrial development; hence there are many more studies of new techniques than studies of their entailed risks.⁵
The main issues for AI are how to assess and mitigate the human, social and environmental risks of its ubiquitous deployment in devices and applications, and how to steer its development toward social good.
AI is deployed in safety-critical applications, such as health, transportation, network and infrastructure management, surveillance and defense. The corresponding risks, in human lives as well as in social and environmental costs, are not sufficiently assessed. They give rise to significant challenges for the verification and validation of AI methods.
The individual use of AI tools entails risks for the security of digital interactions and for the privacy and confidentiality of personal information. The insufficient transparency and intelligibility of current techniques imply further risks of uncritical and inadequate use.
The social acceptability of a technology is much more demanding than market acceptance. Among other things, social acceptability needs to take into account the long term, including possible impacts on future generations. It has to worry about social cohesion, employment, resource sharing, inclusion and social recognition. It needs to integrate the imperatives of human rights and the historical, social, cultural and ethical values of a community. It should consider global constraints affecting the environment or international relations.
The social risks of AI with respect to these requirements are significant. They cover a broad spectrum, from biases in decision support systems (e.g., [7,10]) to fake news, behavior manipulation and debate steering [13]. They include political risks that can be a threat to democracy [6] and human rights [9], as well as risks to the economy (implicit price cartels [4], instability of high-frequency trading [11]) and to employment [1]. AI in enhanced or even autonomous lethal weapons and military systems threatens peace and raises strong ethical concerns, e.g., as expressed in the call for a ban on autonomous weapons [19].
The Partnership on AI. This partnership was created by six companies (Apple, Amazon, Google, Facebook, IBM, and Microsoft) and announced during the Future of Artificial Intelligence conference in 2016. It was subsequently extended into a multi-stakeholder organization which now gathers 100 partners from 13 countries [32]. Its objectives are "to study and formulate best practices on AI technologies, to advance the public's understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society". Since its inception, the Partnership on AI has published a few reports, the most recent being a position paper on the undesirable use of a specific criminal risk assessment tool during the COVID-19 crisis.

⁵ E.g., according to the survey [28], 13% of companies adopting AI are taking actions to mitigate risks.
The European Commission's HLEG. The High-Level Expert Group on AI of the European Commission is among the notable international efforts on the societal impact of AI. Initially composed of 52 multi-disciplinary experts, it started its work in 2018 and published its first report in December of the same year [18]. The report highlights three characteristics that should be met during the lifecycle of an AI system in order for it to be trustworthy: "it should be lawful, complying with all applicable laws and regulations; it should be ethical, ensuring adherence to ethical principles and values; and it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm". Four ethical principles are stressed: human autonomy; prevention of harm; fairness; explainability. The report makes recommendations for technical and non-technical methods to achieve seven requirements (human agency and oversight; technical robustness; etc.).
A period of pilot implementations of the guidelines followed this report; its results have not yet been published. Meanwhile, the European Commission released a White Paper on AI [42], which refers to the ethics recommendations of the HLEG.
• For the category "Fairness", the Beijing AI Principles contain the following: "making the system as fair as possible, reducing possible discrimination and biases, improving its transparency, explainability and predictability, and making the system more traceable, auditable and accountable".
• For the category "Privacy", the Montreal AI Declaration states that "Every person must be able to exercise extensive control over their personal data, especially when it comes to its collection, use, and dissemination."
Rebecca Finlay and Hideaki Takeda report in chapter 5 on the delegation of decisions to machines. Delegating simple daily-life or complex professional decisions to a computerized personal assistant, or to a digital twin, can amplify our capabilities or be a source of alienation. The requirements to circumvent the latter include, in particular, intelligible procedures, articulate and explicit explanations, permanent alignment of the machine's assessment functions with our criteria, as well as anticipation of and provision for an effective transfer of control back to the human, when desirable.
In chapter 6, Françoise Fogelman-Soulié, Laurence Devillers and Ricardo Baeza-Yates address the subject of AI and human values such as equity, protection against biases, and fairness, with a specific focus on nudging and feedback-loop effects. Automated or computer-aided decisions can be unfair because of possibly unintended biases in algorithms or in training data. What technical and operational measures are needed to ensure that AI systems comply with essential human values, that their use is socially acceptable, and possibly even desirable for strengthening social bonds?
Chapter 7, coordinated by Paolo Traverso, addresses important core AI scientific and technological challenges: understanding the inner mechanisms of deep neural networks; optimising neural network architectures; moving to explainable and auditable AI in order to augment trust in these systems; and attempting to solve the talent bottleneck in modern artificial intelligence by using automated machine learning. The field of AI is rich in technical and scientific challenges, as can be seen from the examples given in this chapter.
In chapter 8, Jocelyn Maclure and Stuart Russell consider some of the major challenges for developing inclusive and equitable education, improving healthcare, advancing scientific knowledge and preserving the planet. They examine how properly designed AI systems can help address some of the United Nations SDGs, and discuss the conditions required to bring AI to bear on these challenges. They underline in particular that neither pure knowledge-based approaches nor pure machine learning can solve the global challenges outlined in the chapter; hybrid approaches are needed.
In chapter 9, Carlo Casonato reflects on legal and constitutional issues raised by AI. Taking many examples from real-world usage of AI, mainly in justice, health and medicine, Casonato puts the different viewpoints expressed in the previous chapters into a new perspective regarding regulations, democracy, anthropology and human rights. The chapter ends with a proposal for a set of new (or renewed) human rights, in order to achieve a balanced and constitutionally oriented framework of specific rights for a human-centered deployment of AI systems.
The question of ethical charters for AI is discussed in chapter 10 by Lyse Langlois and Catherine Régis. Surveying the current ethical charters landscape, which has flourished extensively in recent years, the chapter examines the fundamentals of ethics and discusses their relations with law and regulations. It concludes with remarks on the appropriateness of the GPAI, the UN and UNESCO taking the lead in international regulatory efforts towards globally accepted ethics charters for AI.
specific set of issues, with links to other chapters. To further guide the reader about the organization of the covered topics, a possible clustering (with overlaps) of these "Reflections on Artificial Intelligence for Humanity" is the following:

• chapters 7, 13 and 14 are mainly devoted to technological and scientific challenges with AI and to some developments designed to address them;
• chapters 5, 6, 10, and 11 focus on different ethical issues associated with AI;
• chapters 2, 3, 4, 5, and 6 cover the social impacts of AI in the workplace and in personal applications;
• chapters 7, 8, 12 and 13 discuss the possible benefits and risks of AI in several areas such as health, justice, education, humanities and social sciences;
• chapters 3, 9, 14, and 15 address legal and organizational issues raised by AI.
References
1. Arntz, M., Gregory, T., Zierahn, U.: The Risk of Automation for Jobs in OECD Countries. OECD Social, Employment and Migration Working Papers (189) (2016). https://doi.org/10.1787/5jlz9h56dvq7-en. https://www.oecd-ilibrary.org/content/paper/5jlz9h56dvq7-en
2. Bullock, J., Luccioni, A., Pham, K.H., Lam, C.S.N., Luengo-Oroz, M.: Mapping the landscape of artificial intelligence applications against COVID-19. arXiv (2020). https://arxiv.org/abs/2003.11336
3. Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., Srikumar, M.: Principled artificial intelligence: mapping consensus in ethical and rights-based approaches to principles for AI. Technical report 2020, Berkman Klein Center Research Publication (2020). https://doi.org/10.2139/ssrn.3518482
4. Gal, M.S.: Illegal pricing algorithms. Commun. ACM 62(1), 18–20 (2019)
5. Muscettola, N., Nayak, P.P., Pell, B., Williams, B.C.: Remote agent: to boldly go where no AI system has gone before. Artif. Intell. 103, 5–47 (1998)
6. Nemitz, P.: Constitutional democracy and technology in the age of artificial intelligence. Philos. Trans. Roy. Soc. A: Math. Phys. Eng. Sci. 376(2133), 1–14 (2018)
7. O'Neil, C.: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Random House, New York (2016)
8. Perrault, R., et al.: The AI Index 2019 Annual Report. Technical report, Stanford University (2019). http://aiindex.org
9. Raso, F., Hilligoss, H., Krishnamurthy, V., Bavitz, C., Kim, L.Y.: Artificial Intelligence & Human Rights: Opportunities & Risks. SSRN, September 2018
⁶ See http://gpai.ai.
10. Skeem, J.L., Lowenkamp, C.: Risk, Race, & Recidivism: Predictive Bias and Disparate Impact. SSRN (2016)
11. Sornette, D., von der Becke, S.: Crashes and High Frequency Trading. SSRN, August 2011
12. Zeng, Y., Lu, E., Huangfu, C.: Linking artificial intelligence principles. arXiv
(2018). https://arxiv.org/abs/1812.04814v1
13. Zuboff, S.: The Age of Surveillance Capitalism. PublicAffairs, New York (2019)
14. AI for good foundation. https://ai4good.org/about/
15. AI now institute. https://ainowinstitute.org/
16. Ai4People. http://www.eismd.eu/ai4people/
17. Ethically Aligned Design. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf
18. EU High-Level Expert Group on AI. https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence
19. The Future of Life Institute. https://futureoflife.org/open-letter-autonomous-weapons/?cn-reloaded=1
20. The Global Challenges Foundation. https://globalchallenges.org/about/the-global-challenges-foundation/
21. Human-Centered AI. http://hai.stanford.edu/
22. Humane AI. http://www.humane-ai.eu/
23. ImageNet. http://image-net.org/
24. International Telecommunication Union. https://www.itu.int/dms_pub/itu-s/opb/journal/S-JOURNAL-ICTS.V1I1-2017-1-PDF-E.pdf
25. International Observatory on the Societal Impacts of AI. https://observatoire-ia.ulaval.ca/
26. International Planning Competition. http://icaps-conference.org/index.php/Main/Competitions
27. Kitkit School. http://kitkitschool.com/
28. McKinsey Global Institute. https://www.mckinsey.com/featured-insights/artificial-intelligence/global-ai-survey-ai-proves-its-worth-but-few-scale-impact
29. NLP Competitions. https://codalab-worksheets.readthedocs.io/en/latest/Competitions/#list-of-competitions
30. OECD AI Policy Observatory. http://www.oecd.org/going-digital/ai/oecd-initiatives-on-ai.htm
31. onebillion. https://onebillion.org/
32. Partnership on AI. https://www.partnershiponai.org/research-lander/
33. RoboCup. https://www.robocup.org/
34. SAT Competitions. http://satcompetition.org/
35. SQuAD Explorer. https://rajpurkar.github.io/SQuAD-explorer/
36. UK Centre for the Governance of AI. https://www.fhi.ox.ac.uk/governance-ai-program/
37. UN Global Pulse. https://www.unglobalpulse.org/
38. UN Sustainable Development Goals. https://sustainabledevelopment.un.org/?menu=1300
39. UNESCO. https://en.unesco.org/artificial-intelligence
40. Deliberations of the expert group on artificial intelligence at the OECD (2019). https://www.oecd-ilibrary.org/
41. Preliminary study on the ethics of artificial intelligence (2019). https://unesdoc.unesco.org/
42. EU White Paper on AI (2020). https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en
Trustworthy AI
use of AI is needed to facilitate trust in AI and to ensure that all can profit from its benefits. This can guard against the use of biased data or algorithms, ensure that automated decisions are justified and explainable, and help maintain the privacy of individuals.
In recent years, we have seen a rise in efforts around the ethical, societal and legal impact of AI. These are the result of concerted action by national and transnational governance bodies, including the European Union, the OECD, the UK, France, Canada and others, but they have often also originated from bottom-up initiatives launched by practitioners or the scientific community. A few of the best-known initiatives are:
the syntactic computational level and can only decide and act within a bounded set of possibilities defined directly or indirectly (e.g., through machine learning) by human programmers. It is therefore not possible for machines to take ethical decisions, even if their actions can have ethical consequences. This means that no decisions implying ethical deliberation with critical consequences should be delegated to machines.
2.2 Governance
The pioneering work "Learning interpretable models" [54] starts with the saying of Henry Louis Mencken that for every human problem there is a well-known solution that is neat, plausible, and wrong.
This leads us directly to the problem of understanding, with its two faces: the complexity of what is to be explained, and the human predilection for simple explanations that fit into what is already known. Applying the saying to the understanding of AI systems, we may state that AI systems are not neat and are based on assumptions and theories that are not plausible at first sight. Since we are not interested in wrong assertions, we exclude easy solutions and take a look at the complexity of AI systems and of human understanding.
an analysis process based on its learning from processes.⁹ Moreover, the system creates and selects features using multi-objective optimization [43]. Many automated modeling approaches are available today [31,35,40]. The self-optimization of machine learning also applies at the level of implementing the algorithms on hardware architectures [7,37]. Hence, even if the statistical formula and the abstract algorithm are well understood by a user, there remains the actual implementation on a particular computing architecture, including all its optimizations.
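To make the idea concrete, here is a minimal sketch of multi-objective feature selection, not the system described above (whose details are in [43]): it enumerates small feature subsets, scores each by error and subset size, and keeps the Pareto-optimal trade-offs. The toy nearest-mean classifier and all names are illustrative assumptions.

```python
import itertools
import numpy as np

def nearest_mean_error(X, y, features):
    """Toy classifier: nearest class mean on the chosen features,
    scored on the training data itself (illustration only)."""
    Xf = X[:, features]
    means = {c: Xf[y == c].mean(axis=0) for c in np.unique(y)}
    preds = [min(means, key=lambda c: np.linalg.norm(row - means[c])) for row in Xf]
    return float(np.mean(np.array(preds) != y))

def pareto_front(candidates):
    """Keep (error, size, features) triples dominated by no other candidate."""
    return [(e, s, f) for e, s, f in candidates
            if not any(e2 <= e and s2 <= s and (e2, s2) != (e, s)
                       for e2, s2, _ in candidates)]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # only features 0 and 2 matter

candidates = [(nearest_mean_error(X, y, list(f)), k, f)
              for k in range(1, 4)
              for f in itertools.combinations(range(X.shape[1]), k)]
for err, size, feats in sorted(pareto_front(candidates)):
    print(f"error={err:.3f} size={size} features={feats}")
```

A real system would optimize far richer objectives over generated features, but the Pareto-front structure of the trade-off is the same.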
Machine learning algorithms themselves are often compositions. In the simplest case, an ensemble of learned models outputs their majority vote. In the more complex setting of probabilistic graphical models, nodes with some states are linked to form a graph. The structure of the graph indicates the conditional independence structure of the nodes, given their neighboring nodes. Here, the design of the nodes and their neighborhoods may involve human knowledge about the domain which is modeled; this eases the understanding of the model. The likelihood of a node's state depends on the states of all the other nodes, whose likelihoods, in turn, are estimated based on observations. Graphical models estimate a joint probability distribution over all the states of all the nodes. Understanding this requires statistical reasoning. The inference of the likelihood of a certain state of a subset of the nodes, i.e., the answer to a user's question, is a hard problem. There exists a variety of algorithms that approximate the inference. For a user with statistical knowledge, the explicit uncertainty that comes together with a model's answer helps in reflecting on how reliable the answer is. However, at another level, within the most prominent classes (variational inference, (loopy) belief propagation, and Gibbs sampling), diverse algorithms have been developed for specific computing architectures, and each implementation comes along with its own error bounds, memory, energy, and run-time demands.
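As a minimal illustration of the simplest case just mentioned, the sketch below builds an ensemble whose output is the majority vote of its members, each a one-feature threshold rule learned on a bootstrap sample. The tiny stump learner is an illustrative assumption, not a method from this chapter's references.

```python
import numpy as np

def fit_stump(X, y, rng):
    """Learn a one-feature threshold classifier on a bootstrap sample."""
    idx = rng.integers(0, len(X), len(X))            # bootstrap resample
    Xb, yb = X[idx], y[idx]
    best = None
    for f in range(X.shape[1]):
        for t in np.quantile(Xb[:, f], [0.25, 0.5, 0.75]):
            err = np.mean((Xb[:, f] > t).astype(int) != yb)
            if best is None or err < best[0]:
                best = (err, f, t)
    _, f, t = best
    return lambda X, f=f, t=t: (X[:, f] > t).astype(int)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = (X[:, 1] > 0.2).astype(int)

models = [fit_stump(X, y, rng) for _ in range(25)]
votes = np.stack([m(X) for m in models])             # (n_models, n_samples)
majority = (votes.mean(axis=0) > 0.5).astype(int)    # ensemble = majority vote
print("ensemble training error:", float(np.mean(majority != y)))
```

Even in this toy setting, understanding why the ensemble made a particular prediction already requires inspecting 25 separate rules and their vote.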
Deep learning methods are composed of several functions, organized into layers. Between the input nodes and the output nodes are several layers of different types that transform the high-dimensional input step by step into higher-level features, such that in the end a classification can be performed in a better representation space with fewer dimensions. Given the observations and their class membership, learning (or, to be more precise, its optimization procedure) delivers features and local patterns at the intermediate layers. Sometimes, and especially for pictures, which can be interpreted by every user, visualizations of the intermediate local patterns can be interpreted, e.g., as the eye areas of faces. Most often, however, the intermediate representations learned do not correspond to high-level features that human experts use. There are almost infinitely many architectures that combine different layer types, and setting up the training involves many additional degrees of freedom. We know that deep neural networks are capable of approximating every function. However, we do not know whether a particular network architecture with a particular learning set-up delivers the best model. It is most likely that better models exist, but the only way to find them is trial
⁹ See https://rapidminer.com/blog/.
and error. Theoretical propositions of error bounds and resource demands are not always available. Explanation approaches work on the network with the trained weights and learn an explanation on top of it [56]. A well-known technique is Layer-wise Relevance Propagation [5]. Understanding the principles of deep learning and its explanation requires sound knowledge in optimization and algorithmics. Understanding the explanation itself is easy if pictures are classified, because their parts are interpretable; for more abstract signals, even understanding the explanation requires some training. In sum, the many development decisions at several levels of abstraction that make up an AI system are complex, both in themselves and in their interaction.
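As a toy illustration of an explanation computed on top of a trained model (a simpler stand-in for relevance-propagation methods such as [5]), the following sketch measures occlusion sensitivity: it masks each input patch in turn and records how much the model's score drops. The `model` function here is an invented stand-in for a trained network.

```python
import numpy as np

def model(x):
    """Stand-in for a trained network: scores an 8x8 'image' by the
    mean of its top-left region."""
    return float(x[:4, :4].mean())

def occlusion_map(x, predict, patch=2, baseline=0.0):
    """Relevance of each patch = score drop when the patch is masked."""
    base_score = predict(x)
    relevance = np.zeros_like(x)
    for i in range(0, x.shape[0], patch):
        for j in range(0, x.shape[1], patch):
            masked = x.copy()
            masked[i:i + patch, j:j + patch] = baseline
            relevance[i:i + patch, j:j + patch] = base_score - predict(masked)
    return relevance

x = np.ones((8, 8))
print(np.round(occlusion_map(x, model), 3))   # nonzero only in the top-left
```

Note how even this simple explanation presumes the user can interpret the input's parts, which holds for pictures but not for abstract signals.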
possibilities, and, regarding our purposes, the reasons for the behavior observed or the decision suggested.
What constitutes an 'explanation' was investigated as early as Aristotle's Physics, a treatise dating back to the 4th century BC. Today it is urgent to give explanation a functional meaning, as an interface between people and the algorithms that suggest decisions, or that decide directly.
Really useful AI systems for decision support, especially in high-stakes domains such as health, job screening and justice, should enhance the awareness and the autonomy of the human decision maker, so that the ultimate decision is more informed, as free of bias as possible, and ultimately 'better' than the decision that the human decision maker would have made without the AI system, as well as 'better' than the automated decision of the AI system alone.
Decision making is essentially a socio-technical system, where a decision maker interacts with various sources of information and decision support tools, whose quality should be assessed in terms of the final, aggregated outcome (the quality of the decision) rather than by assessing only the quality of the decision support tool in isolation (e.g., in terms of its predictive accuracy and precision as a stand-alone tool). For this purpose, rather than purely predictive tools, we need tools that explain their predictions in meaningful terms, a property that is rarely matched by the AI tools available on the market today.
Following the same line of reasoning, AI predictive tools that do not satisfy the explanation requirement should simply not be adopted, consistently with the GDPR's provisions concerning the 'right of explanation' (see Articles 13(2)(f), 14(2)(g), and 15(1)(h), which require data controllers to provide data subjects with information about 'the existence of automated decision-making, including profiling and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.')
Different roles are played within the decision-making pipeline; it is therefore important to clarify for whom an explanation should be interpretable and which kinds of questions each role can ask.
– End users: 'Am I being treated fairly?', 'Can I contest the decision?', 'What could I do differently to get a positive outcome?'
– Engineers and data scientists: 'Is my system working as designed?'
– Regulators: 'Is it compliant?'
4.1 Approaches
The most recent works in the literature are discussed in the review [23], which organizes them according to the ontology illustrated in the figure below (Fig. 1). Today we have encouraging results that allow us to reconstruct individual explanations, i.e., answers to questions such as 'Why wasn't I chosen for the place I applied for?' and 'What should I change to overturn the decision?'
Fig. 1. Open the Black Box Problems. The first distinction concerns XbD and BBx. The latter can be further divided between Model Explanation, when the goal of explanation is the whole logic of the dark model, Outcome Explanation, when the goal is to explain decisions about a particular case, and Model Inspection, when the goal is to understand general properties of the dark model.
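Under strongly simplifying assumptions, an individual counterfactual question such as 'What should I change to overturn the decision?' can be answered for a black-box classifier by searching for the smallest change that flips its output, as in the sketch below. The linear scoring function and all numbers are illustrative stand-ins.

```python
import numpy as np

def classifier(x):
    """Stand-in black box: approve (1) if a linear score clears a threshold."""
    w = np.array([0.6, 0.3, -0.4])               # illustrative weights
    return int(float(x @ w) > 1.0)

def one_feature_counterfactual(x, predict, deltas=np.linspace(-3, 3, 121)):
    """Smallest single-feature change that flips the prediction."""
    original, best = predict(x), None
    for f in range(len(x)):
        for d in deltas:
            cand = x.copy()
            cand[f] += d
            if predict(cand) != original and (best is None or abs(d) < abs(best[1])):
                best = (f, d)
    return best

x = np.array([0.5, 0.5, 0.5])                    # a rejected applicant
f, d = one_feature_counterfactual(x, classifier)
print(f"changing feature {f} by {d:+.2f} would overturn the decision")
```

Real counterfactual explanators must additionally ensure that the suggested change is actionable and plausible for the person concerned.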
– Open the black-box (BBx): at the state of the art, for text and images the best learning methods are based on deep neural networks; therefore post-hoc explanators, capable of achieving the required quality standards above, need to be coupled with the black box (a minimal sketch follows this list).
– Transparency by design of hybrid AI algorithms (XbD): the challenge is twofold: i) to link learnt data models with a priori knowledge that is explicitly represented through a knowledge graph or an ontology, which would make it possible to relate the features extracted by deep learning inference to definitions of objects in a knowledge space; different kinds of hybrid systems should be investigated, from loose coupling to tight integration of symbolic and numerical models; ii) to re-think machine learning as a joint optimization problem of both accuracy and explainability.
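A minimal sketch of the post-hoc coupling mentioned in the first item, in the spirit of local surrogate methods: a black-box prediction is explained around one instance by fitting an interpretable linear model to the box's outputs on small perturbations of that instance. The black box here is an invented stand-in.

```python
import numpy as np

def black_box(X):
    """Stand-in for an opaque model: a nonlinear score of two features."""
    return np.tanh(2.0 * X[:, 0] - X[:, 1] ** 2)

def local_surrogate(x, predict, n=500, scale=0.1, seed=0):
    """Fit a linear model to the black box near x; its coefficients
    serve as local feature importances."""
    rng = np.random.default_rng(seed)
    Xp = x + scale * rng.normal(size=(n, len(x)))   # local perturbations
    yp = predict(Xp)
    A = np.hstack([Xp, np.ones((n, 1))])            # add intercept column
    coef, *_ = np.linalg.lstsq(A, yp, rcond=None)
    return coef[:-1]                                # per-feature local weights

x = np.array([0.5, 1.0])
print("local importances:", np.round(local_surrogate(x, black_box), 3))
```

The surrogate is only locally faithful; whether that fidelity meets the quality standards demanded above must be assessed case by case.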
5 Verification
Verification is typically the process of
providing evidence that something that was believed (some fact or hypoth-
esis or theory) is correct.
This can take many forms within computational systems, with a particularly
important variety being formal verification, which can be characterised as
5.1 Issues
As we turn to AI systems, particularly autonomous systems that have key responsibilities, we must be sure that we can trust them to act independently.
5.2 Approaches
In verifying reliability, there is a wide range of techniques, many of which provide probabilistic estimates of the reliability of the software [39]. In verifying beneficiality, there are far fewer methods. Indeed, which verification method we can use depends on how decisions are made. Beyond the broad definition of autonomous systems as "systems that make their own decisions without human intervention", there are a variety of options.
– Automatic: whereby a sequence of prescribed activities is fixed in advance. Here, the decisions are made by the original programmer, and so we can carry out formal verification on the (fixed) code. (Note, however, that these systems show little flexibility.)
– Learning (trained system): whereby a machine learning system is trained offline from a set of examples. Here, the decisions are essentially taken by whoever chose the training set. Formal verification is very difficult (and often impossible) since, even when we know the training set, we do not know which attributes of the training set are important (and what bias was in the training set). Hence the most common verification approach here is testing.
– Learning (adaptive system): whereby the system’s behaviour evolves through
environmental interactions/feedback.
In systems such as this (reinforcement learning, adaptive systems, etc.), the
decisions are effectively taken by the environment. Since we can never fully
describe any real environment, we are left with either testing or approximation
as verification approaches.
– Fully autonomous: whereby decisions involve an algorithm based on internal principles/motivations and (beliefs about) the current situation. Decisions are made by software, not fixed in advance and not directly driven by the system's environment or training. Here, rather than verifying all the decisions the system might make (which we do not know), we can verify the way that the system makes decisions [10]. At any particular moment, will it always make the best decision given what it knows about the situation? (A minimal sketch of this kind of check follows.)
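A minimal sketch of this idea, under strong simplifying assumptions: instead of testing individual runs, we exhaustively check the property "the agent always selects the action its own utility model rates best" over every belief state of a tiny finite model. The toy agent, beliefs, and utilities are all invented for illustration.

```python
from itertools import product

ACTIONS = ["brake", "steer", "continue"]

def believed_utility(belief, action):
    """Toy model of what the agent believes each action is worth."""
    obstacle, wet_road = belief
    if action == "brake":
        return 10 if obstacle else 1
    if action == "steer":
        return 6 if (obstacle and not wet_road) else 2
    return 0 if obstacle else 8                      # "continue"

def agent_decision(belief):
    """The hand-coded decision procedure we want to verify."""
    obstacle, _ = belief
    return "brake" if obstacle else "continue"

# Exhaustively check the decision property over all belief states.
for belief in product([False, True], repeat=2):
    best = max(ACTIONS, key=lambda a: believed_utility(belief, a))
    assert agent_decision(belief) == best, f"bad decision in state {belief}"
print("verified: the agent always takes the believed-best action")
```

Practical program model checkers work on the same principle, exploring the reachable belief states of the real decision code rather than a four-state toy.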
5.3 Challenges
What is our real worry about autonomous systems? It is not particularly that we think they are unreliable [55], but that we are concerned about their intent. What are they trying to do, and why are they doing it? It is here that 'why' becomes crucial. In complex environments we cannot predict all the decisions that must be made (and so cannot pre-code all the 'correct' decisions), but we can ensure that, in making its decisions, an autonomous system will carry them out "in the right way". Unless we can strongly verify that autonomous systems will certainly try to make the right decisions, and make them for the right reasons, it is irresponsible to deploy such systems in critical environments.
In summary, if we build our system well (exposing reasons for decisions) and pro-
vide strong verification, then we can make significant steps towards trustworthy
autonomy. If we can expose why a system makes its decisions then:
1. we can verify (prove) that it always makes the appropriate decisions [10];
2. we can help convince the public that the system has “good intentions” [36];
3. we can help convince regulators to allow/certify these systems [15]; and so
4. give engineers the confidence to build more autonomous systems.
¹⁰ See also chapters 9 and 10 of this book.
Elsewhere I have argued that international human rights standards offer the most promising set of ethical standards for AI, as several civil society organisations have suggested, for the following reasons.¹¹
First, as an international governance framework, human rights law is intended to establish global standards ('norms') and mechanisms of accountability that specify the way in which individuals are entitled to be treated, of which the UN Universal Declaration of Human Rights (UDHR) of 1948 is the best known. Despite considerable variation between regional and national human rights charters, they are all grounded on a shared commitment to uphold the inherent human dignity of each and every person, in which each individual is regarded as of equal worth, wherever situated [41]. These shared foundations reflect the status of human rights standards as basic moral entitlements of every individual in virtue of their humanity, whether or not those entitlements are backed by legal protection [12].
Secondly, a commitment to effective human rights protection is a critical and indispensable requirement of democratic constitutional orders. Given that AI systems increasingly configure our collective and individual environments, entitlements, and access to, or exclusion from, opportunities and resources, it is essential that the protection of human rights, alongside respect for the rule of law and the protection of democracy, is assured to maintain the character of political communities as constitutional democracies, in which every individual is free to pursue his or her own version of the good life as far as this is possible within a framework of peaceful and stable cooperation underpinned by the rule of law [28].
Thirdly, the well-developed institutional framework through which systematic attempts are made to monitor, promote and protect adherence to human rights norms around the world offers a well-established analytical framework through which tension and conflict between rights, and between rights and collective interests of considerable importance in democratic societies, are resolved
¹¹ See various reports by civil society organisations concerned with securing the protection of international human rights norms, e.g., [41, 42]. See also the Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems (2018) (available at https://www.accessnow.org/the-toronto-declaration-protecting-the-rights-to-equality-and-non-discrimination-in-machine-learning-systems/); the Montreal Declaration for a Responsible Development of Artificial Intelligence: A Participatory Process (2017) (available at https://nouvelles.umontreal.ca/en/article/2017/11/03/montreal-declaration-for-a-responsible-development-of-artificial-intelligence/); Access Now (see https://www.accessnow.org/tag/artificial-intelligence/ for various reports); Data & Society (see https://datasociety.net/); IEEE's report on ethically aligned design for AI (available at https://ethicsinaction.ieee.org/), which lists as its first principle that AI design should not infringe international human rights; and the AI Now Report (2018) (available at https://ainowinstitute.org/AI_Now_2018_Report.pdf).
Much more theoretical and applied research is required to flesh out the details of our proposed approach, generating multiple lines of inquiry that must be pursued to develop the technical and organisational methods and systems that will be needed, based on the adaptation of existing engineering and regulatory techniques aimed at ensuring safe system design, and on re-configuring and extending these approaches to secure compliance with a much wider and more complex set of human rights norms. It will require identifying and reconfiguring many aspects of software engineering (SE) practice to support meaningful human rights evaluation and compliance, complemented by a focused human rights-centred interdisciplinary research and design agenda. To fulfil this vision of the human rights-centred design, deliberation and oversight necessary to secure trustworthy AI, several serious challenges must first be overcome, at the disciplinary level, the organisational level, the industry level, and the policy-making level, none of which will be easily achieved. Furthermore, because human rights are often highly abstract in nature and lack sharply delineated boundaries, given their capacity to adapt and evolve in response to their dynamic socio-technical context, there may well be only so much that software and system design and implementation techniques can achieve in attempting to transpose human rights norms and commitments into the structure and operation of AI systems in real-world settings. Nor can a human rights-centred approach ensure the protection of all ethical values adversely implicated by AI, given that human rights norms do not comprehensively cover all values of societal concern. Rather, our proposal for the human rights-centred governance of AI systems constitutes only one important element in the overall socio-political landscape needed to build a future in which AI systems are compatible with liberal democratic political communities in which respect for human rights and the rule of law lies at the bedrock.¹² In other words, human rights norms provide a critical starting point in our quest to develop genuinely trustworthy AI, the importance of which is difficult to overstate. As the UN Secretary-General's High-Level Panel on Digital Cooperation (2019) has stated:
7 Beneficial AI

¹² See also chapter 9 of this book.
As machines, unlike humans, do not come with objectives, those are supplied
exogenously, by us. So we create optimizing machinery, plug in the objectives,
and off it goes.
I will call this the standard model for AI. It is instantiated in slightly different ways in different subfields of AI. For example, problem-solving and planning algorithms (depth-first search, A*, SATPlan, etc.) aim to find least-cost action sequences that achieve a logically defined goal; game-playing algorithms maximize the probability of winning the game; MDP (Markov Decision Process) solvers and reinforcement learning algorithms find policies that maximize the expected discounted sum of rewards; supervised learning algorithms minimize a loss function. The same basic model holds in control theory (minimizing cost), operations research (maximizing reward), statistics (minimizing loss), and economics (maximizing utility, GDP, or discounted quarterly profit streams).
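As a concrete instance of the standard model, the sketch below runs value iteration on a tiny MDP: the objective (the reward function) is supplied exogenously, plugged in, and the machinery finds a policy maximizing the expected discounted sum of rewards. The MDP itself is an invented toy.

```python
import numpy as np

# Toy MDP: 3 states, 2 actions. P[a][s, s'] = transition probability.
P = np.array([
    [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]],   # action 0
    [[0.2, 0.8, 0.0], [0.0, 0.2, 0.8], [0.0, 0.0, 1.0]],   # action 1
])
R = np.array([[0.0, 1.0], [0.0, 1.0], [1.0, 0.0]])  # R[s, a]: the plugged-in objective
gamma = 0.9                                          # discount factor

V = np.zeros(3)
for _ in range(200):                                 # value iteration
    Q = R + gamma * np.einsum("ast,t->sa", P, V)     # Q[s,a] = R + gamma * E[V(s')]
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)                            # greedy optimal policy
print("values:", np.round(V, 2), "policy:", policy)
```

The machinery is indifferent to whether R encodes what we actually want; it optimizes whatever objective is plugged in, which is precisely the weakness discussed next.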
Unfortunately, the standard model fails when we supply objectives that are incomplete or incorrect. We have known this for a long time. For example, King Midas specified his objective, that everything he touch turn to gold, and found out too late that this included his food, drink, and family members. Many cultures have some variant of the genie who grants three wishes; in these stories, the third wish is usually to undo the first two wishes. In economics, this is the problem of externalities, where (for example) a corporation pursuing profit renders the Earth uninhabitable as a side effect.
Until recently, AI systems operated largely in the laboratory and in toy, simulated environments. Errors in defining objectives were plentiful [38], some of them highly amusing, but in all cases researchers could simply reset the system and try again. Now, however, AI systems operate in the real world, interacting directly with billions of people. For example, content selection algorithms in social media determine what a significant fraction of all human beings read and watch for many hours per day. Initial designs for these algorithms specified an objective to maximize some measure of click-through or engagement. Fairly soon, the social media companies realized the corrosive effects of maximizing such objectives, but fixing the problem has turned out to be very difficult.
Content selection algorithms in social media are very simple learning algorithms that typically represent content as feature vectors and humans as sequences of clicks and non-clicks. Clearly, more sophisticated and capable algorithms could wreak far more havoc. This is an instance of a general principle [48]: with misspecified objectives, the better the AI, the worse the outcome. An AI system pursuing an incorrect objective is by definition in conflict with humanity.
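The following toy simulation illustrates the principle under invented assumptions: a recommender greedily maximizes a click-through proxy, the user's taste drifts toward whatever it is shown, and a separately defined 'true utility' declines even as engagement rises. All numbers and the drift model are fabricated for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

pos = np.linspace(-3, 3, 121)          # item positions; extremes are "outrage"
quality = -np.abs(pos)                 # true utility: extreme content is worse

taste = 0.0
for t in range(25):
    reachable = np.abs(pos - taste) < 1.0                   # user sees nearby items
    score = 0.5 * np.abs(pos) - 0.3 * np.abs(pos - taste)   # predicted engagement
    score[~reachable] = -np.inf
    chosen = int(np.argmax(score))                          # greedy proxy optimizer
    taste += 0.3 * (pos[chosen] - taste)                    # the user's taste drifts
    if t in (0, 8, 24):
        print(f"t={t:2d}  engagement={sigmoid(score[chosen]):.2f}  "
              f"true utility={quality[chosen]:+.2f}")
```

Engagement climbs across the run while the true utility of the consumed content falls: the better the proxy optimizer, the worse the outcome it was never asked to care about.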
The mistake in the standard model is the assumption that we humans can supply
a complete and correct definition of our true preferences to the machine. From
the machine’s point of view, this amounts to the assumption that the objective
it is pursuing is exactly the right one. We can avoid this problem by defining the
goals of AI in a slightly different way [53]:
Machines are beneficial to the extent that their actions can be expected to
achieve our objectives.