ETHICS FOR ROBOTS
Ethics for Robots describes and defends a method for designing and evaluating
ethics algorithms for autonomous machines, such as self-driving cars and search
and rescue drones. Derek Leben argues that such algorithms should be evaluated by
how effectively they solve the problem of cooperation among self-interested
organisms, and therefore, rather than simulating the psychological systems that have
evolved to solve this problem, engineers should be tackling the problem itself, tak-
ing relevant lessons from our moral psychology.
Leben draws on the moral theory of John Rawls, arguing that normative moral
theories are attempts to develop optimal solutions to the problem of cooperation. He
claims that Rawlsian Contractarianism leads to the ‘Maximin’ principle – the action
that maximizes the minimum value – and that the Maximin principle is the most
effective solution to the problem of cooperation. He contrasts the Maximin principle
with other principles and shows how they can often produce non-cooperative results.
Using real-world examples – such as an autonomous vehicle facing a situation
where every action results in harm, home care machines, and autonomous weap-
ons systems – Leben contrasts Rawlsian algorithms with alternatives derived from
utilitarianism and natural rights libertarianism.
Including chapter summaries and a glossary of technical terms, Ethics for Robots
is essential reading for philosophers, engineers, computer scientists, and cognitive
scientists working on the problem of ethics for autonomous systems.
Derek Leben
First published 2019
by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
711 Third Avenue, New York, NY 10017
Routledge is an imprint of the Taylor & Francis Group, an informa business
© 2019 Derek Leben
The right of Derek Leben to be identified as author of this work has been
asserted by him in accordance with sections 77 and 78 of the Copyright,
Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced
or utilised in any form or by any electronic, mechanical, or other means,
now known or hereafter invented, including photocopying and recording,
or in any information storage or retrieval system, without permission in
writing from the publishers.
Trademark notice: Product or corporate names may be trademarks or
registered trademarks, and are used only for identification and explanation
without intent to infringe.
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library
Library of Congress Cataloging-in-Publication Data
Names: Leben, Derek, author.
Title: Ethics for robots: how to design a moral algorithm / Derek Leben.
Description: Abingdon, Oxon; New York, NY: Routledge, 2018. |
Includes bibliographical references and index.
Identifiers: LCCN 2018007376 | ISBN 9781138716155 (hbk: alk. paper) |
ISBN 9781138716179 (pbk: alk. paper) | ISBN 9781315197128 (ebk)
Subjects: LCSH: Robotics–Moral and ethical aspects. |
Robotics–Safety measures.
Classification: LCC TJ211.28 .L43 2018 | DDC 174/.9629892–dc23
LC record available at https://lccn.loc.gov/2018007376
ISBN: 978-1-138-71615-5 (hbk)
ISBN: 978-1-138-71617-9 (pbk)
ISBN: 978-1-315-19712-8 (ebk)
Typeset in Bembo
by Deanta Global Publishing Services, Chennai, India
Dedicated to Sean and Ethan
CONTENTS
Acknowledgments viii
Introduction 1
1 Moral psychology 7
2 Cooperation problems 25
3 Theories 42
4 Contractarianism 59
5 Ethics engines 76
6 Avoiding collisions 97
Conclusions 146
Glossary 151
Index 155
ACKNOWLEDGMENTS
Thanks to David Danks, Jeff Maynes, and Michael Cox for helpful feedback.
And thanks to the 61B café for being a good place to write a book.
INTRODUCTION
The word robot was coined by the Czech playwright and journalist Karel Capek
in his 1920 play, Rossum’s Universal Robots. The play is about a company that cre-
ates biological human workers that don’t have a soul, so they can take over all the
jobs that humans usually do. If this idea seemed far-fetched in the 1920s, it doesn’t
seem that way today. Machines are rapidly taking over tasks that were previously
performed by humans in every domain of our society, including agriculture,
factory production, transportation, medicine, retail sales, finance, education, and
even warfare. Most of these machines have simple programs that allow them to
automatically perform a single repetitive task like welding car doors, scanning
products, or vacuuming the floor. Gradually, machines are beginning to also
take over tasks which involve weighing several different options to arrive at a
decision, like driving, diagnosing diseases, and responding to threats. Let’s call
these kinds of tasks complex. For our purposes, a robot is any physically embodied
machine that can perform complex tasks without any direct human intervention.
According to this definition, a robot doesn’t necessarily have to look like a
human. For instance, I’ll consider driverless cars and certain kinds of missile
systems to be robots. The key feature of a robot is that it is autonomous, which
means making decisions based on principles or reasoning without direct human
intervention. Autonomy has a special meaning in moral philosophy; it’s not just
being able to act in response to input (this might be called automatic), but instead,
being able to think and make decisions in a responsible way. This requires a
minimal kind of artificial intelligence, but I will consider AI to be a broader
class of systems that are not necessarily embodied and more general in the scope
of their abilities. The robots that we’re most interested in are machines that
operate dangerous vehicles or equipment, perform medical services, and provide
security. These machines will be making decisions that could result in harm to
others, which is why we need a framework for designing ethical robots.
One early proposal for a robot ethics originated in the science-fiction stories
of Isaac Asimov. Asimov’s three laws of robotics are:
1. A robot may not harm a human being or, through inaction, allow a human
being to come to harm.
2. A robot must obey orders given it by human beings except where such
orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not
conflict with the First or Second Law.
These principles seem appealing at first, and they work well enough as a rough
guideline for most normal situations. It’s no coincidence that the first rule looks
like the “golden rule” that’s been repeated throughout cultural and religious
traditions from Confucianism to Christianity.
Even though “don’t cause or allow harm” is good as a rough guideline for
behavior, it won’t work as a detailed rule for decision-making. One major prob-
lem is that this law is useless until we define what counts as harm. Is it harming
someone to insult them, lie to them, or trespass on their property? What about
actions that violate a person’s consent or dignity? Some actions are only likely to
cause harm, but what’s the threshold for likely harm? Every action could possibly
lead to harm, so failing to specify this threshold will leave robots paralyzed with
fear, unable to perform even the most basic of tasks.
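To see the paralysis problem concretely, here is a minimal sketch (my own illustration; the action names, probabilities, and threshold values are assumptions, not anything from this book) of a first-law-style filter that forbids any action whose probability of causing harm exceeds a threshold:

```python
# Illustrative sketch of a naive "don't cause or allow harm" filter.
# If every available action (including inaction) carries some nonzero
# probability of harm, a threshold of zero forbids everything.

actions = {
    "drive_to_hospital": 0.02,  # assumed probability that some harm results
    "stay_parked":       0.01,  # inaction can also allow harm
    "brake_hard":        0.05,
}

def permissible(harm_probability: float, threshold: float) -> bool:
    """An action passes the filter only if its harm probability is within the threshold."""
    return harm_probability <= threshold

for threshold in (0.0, 0.03):
    allowed = [name for name, p in actions.items() if permissible(p, threshold)]
    print(threshold, allowed)
# 0.0  -> []                                    (the robot is "paralyzed": nothing is permitted)
# 0.03 -> ['drive_to_hospital', 'stay_parked']  (a threshold must be chosen and defended)
```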
Another problem with a rule like “don’t cause or allow harm” is that it immedi-
ately breaks down once we venture to situations where every action leads to harm
for a human being, even the action of doing nothing at all. These cases are called
moral dilemmas. Although moral dilemmas aren’t common, they do happen, and
they can be disturbingly frequent in fields like medicine and warfare. Doctors
must sometimes decide between respecting a patient’s wishes and doing what’s
best for her health. Soldiers must sometimes decide between killing defenseless
enemies and allowing civilians to die. Moral dilemmas are cases where harms to
one person or group are inevitable, so an agent must decide which harm is worse.
What we need is a more specific way to define and weigh harms.
Asimov’s laws might be called a “top-down” approach to programming ethi-
cal robots, where we use a set of rigid rules to constrain and guide behavior.
An alternative approach, which is sometimes called “bottom-up,” uses machine
learning techniques to change a robot’s behavior in response to positive or nega-
tive feedback. It’s true that some of the most impressive examples of machines
performing complex human tasks in the past decade have used this kind of rein-
forcement learning within a multilayered network of flexible artificial neurons.
For instance, in 2010, the Stanford computer scientist Fei-Fei Li began an annual
competition where competitors would each submit their object recognition
algorithms to be tested against the most massive database of images in the world,
called ImageNet. The algorithms were compared based on performance on tasks
like recognition and classification. The human error rate on these tasks is about
5–6 percent. In 2010, the winning algorithm had an error rate of 28 percent. In
2011, the error rate was 25 percent. Then, in 2012, the introduction of “deep
learning” techniques produced a winning algorithm with an error rate of just 15
percent. Since then, the winner has improved its accuracy every year, with the
2015 algorithm having an error rate of 3.5 percent, which is better than human
judgment. If machine learning can simulate human performance at recognizing
images, using language, and driving vehicles, why not moral judgments?
The challenge for a machine learning approach to designing robot ethics is
that choices must be made about what kind of information is used as positive or
negative feedback. If we are using human judgments to model machine judg-
ments, then robots will inevitably incorporate the biases and inconsistencies
in our own psychology: preference for people who are familiar or genetically
related, ignoring the effects of our actions on people who are very distant, and
relying on false beliefs about what kinds of actions are harmful. A considerable
number of human beings over the course of history have been raised to approve
of horrible things like genocide, rape, slavery, torture, and child abuse, to name
just a few. Even if we take historical exemplars like Aristotle as our training set,
a well-respected model citizen in Aristotle’s homeland of fourth-century (bc)
Macedonia would probably be a slave-holding pedophile. My point is that we
can’t simply point a machine learning network at human behavior and shout:
“Learn!” Instead, machine learning approaches will need to make important
theoretical assumptions about what kinds of data are morally important.
The aim of this book is to provide a general theoretical framework for design-
ing moral algorithms, whether they be “top-down” or “bottom-up.” Many of
the engineers and scientists working on this problem don’t have training in eth-
ics, and don’t seem to think that they need any. However, it’s impossible to
design an ethics procedure for machines without making substantial theoretical
assumptions about how we solve moral problems. Perhaps the only way to objec-
tively solve moral problems is by understanding the function of morality. Armed
with a proper understanding of the practical function of morality, we can turn
to the engineering task of designing an artificial system that performs this same
function just as well as humans, if not better.
How can a machine be better at making moral decisions than a human being?
In the twenty-first century, most of us have no problem acknowledging that
computers can make better decisions than an average human when it comes
to games or calculations, but how could a machine ever surpass us in some-
thing so fundamentally human as moral decisions? This response reveals an impor-
tant assumption: that morality is essentially a product of our own minds, and
somehow limited to human beings. In philosophy, the term for this position is
“anti-realism,” as opposed to “realism.” The choice between realism or anti-
realism turns out to be the most important initial assumption in any discussion
about ethics.
If moral realism is false, and there are no objective mind-independent answers
to moral questions, then ethics is about a set of psychological responses and
invent. No rule of general morality can show you what you ought to do: no signs
are vouchsafed in this world.” I imagine that the student’s response was a sar-
castic: “Great, thanks.” There are many moral theories that make vague claims
about one being a virtuous person and expressing care for other people, but these
theories don’t provide any practical guidelines for actions. Thinking about how
we would program a machine to be “virtuous” or “caring” illustrates how useless
these moral theories can be. It forces us to be specific in ways that we’ve never
been forced to be, and to do the hard work needed to produce a real decision-
procedure, not just for machines but for ourselves.
The first half of this book will survey approaches to modeling ethics algo-
rithms based on (1) universal features of human moral psychology, (2) successful
strategies in cooperation games, and (3) historically influential moral theories.
All of these approaches are promising, but what is needed is an overarching
theoretical framework tying them together. If we view them as linked together
by functionality, it will enable us to take the parts of each approach that work
and ignore the parts that are irrelevant. Specifically, I’ll argue that our moral
intuitions are the product of a psychological network that adapted in response to
the problem of enforcing cooperative behavior among self-interested organisms.
Moral theories are (sometimes unconscious) attempts to clarify and generalize
these intuitive judgments. Once we view both moral intuitions and theories as
goal-directed, it’s possible to objectively evaluate which features are most effec-
tive at accomplishing this goal. As the most effective solution to cooperation
problems, Contractarianism contains the universal features of our moral psy-
chology and extends them consistently to non-cooperative contexts like moral
dilemmas. It provides a detailed way of determining which objects are valuable,
what kinds of actions are harmful, and the importance of concepts like rights and
consent, as well as providing a general rule called the Maximin principle that can
produce a unique decision in even the most difficult of moral dilemmas. If you
aren’t convinced by my arguments that ethics is connected to cooperation prob-
lems, the rest of the book is still valuable: you can just think of it as “cooperation
for robots” instead.
The second half of the book will examine how Contractarianism can be
turned into a program for autonomous machines. Using chess engines as a
model, I describe how an ethics engine based on Contractarianism would
operate, what its algorithms might look like, and how this procedure applies to
decisions made by robots in various domains. The domains include transpor-
tation, saving lives, and keeping the peace. There are many important ques-
tions about robot ethics that aren’t discussed here. These questions include:
“Under what conditions is a robot morally responsible for its actions?” “Who
do we hold responsible when a robot misbehaves?” “When, if ever, does a
robot become a person with rights?” “What are the social, economic, and legal
implications of allowing robots to take over more of our decision-making?”
Instead, the focus of this book is: “If we are going to build autonomous robots
in domains like transportation, medicine, and war, here are the moral principles
that should constrain their decisions.” This leaves open whether autonomous
robots can ever be genuinely responsible, worthy of rights, or even a good idea
to build in the first place. My suspicion is that full automation is a good idea
in domains like transportation, but a bad idea in domains like warfare. I am
also generally optimistic that increasing the presence of autonomous machines
into our society and economy will produce beneficial results, providing greater
wealth, leisure, and security for even the worst-off members of the population.
However, I have been wrong before.
Designing ethical machines is an interdisciplinary project, and this book will
draw on work in math, biology, economics, philosophy, computer science, and
cognitive science. I will start by apologizing to each of these fields, because I’m
going to oversimplify topics that require volumes to address in detail. This even
includes my own field of philosophy; I’ll often be squeezing important debates
that span centuries of time and volumes of texts into a single page or even a sin-
gle paragraph. I’m painfully aware of the extent to which these topics are being
condensed and often oversimplified. However, given that I am trying to present
a broad theoretical framework here in an accessible and compact format, I hope
that the philosophers will forgive me as well.
The goal of interdisciplinary research is to tie together work in different fields
in a new way, leading to unexpected results. I’ve tried to write the book at a
level that makes it accessible to any motivated reader. Mathematical details and
intimidating symbols have been left out or pushed to endnotes. Any graphs or
formal expressions that I’ve left in the main text should be accessible to anyone
with even the most basic math background, and I encourage you to spend a
minute or two working through them. Many people don’t have the time or the
attention span to read an entire book. Don’t feel bad about it. I’ve tried to make
each chapter somewhat independent, so feel free to skip around. For better or
worse, every citizen of the industrialized world must now take an interest in
robot decision-making, and the people working on them should be trying to
bring the conversation to the public as much as possible.
1
MORAL PSYCHOLOGY
Harry Truman had only been the vice-president for 82 days when, on April
12, 1945, Franklin Roosevelt died. Truman became the 33rd president at the
end of World War II, when the Japanese Empire had been pushed back to its
mainland islands, and preparations were being made for the Allied invasion of
Japan. The planned invasion, called Operation Downfall, was estimated to last
for years and cost perhaps half a million American lives, along with millions
more Japanese casualties.
Just 13 days after taking office, the U.S. Secretary of War, Henry Stimson,
sent Truman a letter describing a “secret matter” that “I think you ought to
know about … without delay.” This secret was the atomic bomb, which had
been the product of secret research conducted at the Los Alamos research site.
Within just a few months of learning about what he called “the most terrible
bomb in the history of the world,” Truman now faced a monumental decision:
would he allow the planned invasion of Japan to continue its deadly course,
costing millions of lives, or deliberately drop atomic bombs on Japanese cities,
at a cost of maybe only a few hundred thousand lives, to force the enemy into
unconditional surrender?
Truman chose to drop the atomic bomb, first on Hiroshima (August 6) and
then on Nagasaki (August 9). The death tolls are estimated at 90,000–120,000
for Hiroshima, and 60,000–80,000 for Nagasaki, the majority of both being
civilian deaths. Thousands of innocent men, women, and children were imme-
diately killed by the blast, and thousands more died over the following weeks
from radiation exposure. On August 15, Japan declared unconditional surrender.
There has been intense debate since the bombing as to whether the decision was
morally acceptable. Public opinion has shifted dramatically over time, with Pew
polls suggesting that 85 percent of Americans approved of the bombing in 1945,
but only 57 percent in 2016. Critics argue that Japan
was already looking for a way to end the war, and that there were other options
that could have forced them to surrender without dropping additional bombs.
For the sake of this discussion, let’s assume that these really were the only two
options: either invade Japan at a cost of millions of lives, or drop bombs on two
mostly civilian populations at a cost of hundreds of thousands of lives. Which is
the right decision?
To those of you familiar with moral philosophy, you’ll recognize a similarity
here to one of the most famous thought-experiments in ethics, called the trolley
problem, originally developed by philosophers Philippa Foot (1967) and Judith
Thomson (1976). In the standard version of the trolley problem, a runaway train
is heading towards five people, but you can save them by diverting the train to
a side-track. Unfortunately, there is a single person on the side-track who will
certainly die as a result of pulling the switch (Figure 1.1).
There’s a reason why the trolley problem has been a useful tool for philoso-
phers and psychologists; dilemmas like these can reveal the inner workings of
the way that we make moral judgments. According to Mikhail’s (2011) extensive
cross-cultural surveys, most people surveyed on this question think the right
decision is to pull the switch. You might think that the reason for pulling the
switch is obvious: pick the action that saves the most lives. However, when pre-
sented with an alternate scenario, where it’s necessary to push a large man in
front of the train to save five people (Figure 1.2), almost everybody rejects it as
morally wrong. This is strange, since the “save the most lives” principle predicts
that pushing a man in front of a train is no different from diverting the train to
kill a pedestrian; both actions are sacrificing one to save five. It’s obvious from
this example that actual moral judgments are more complicated than we thought.
Trolley Problem (Bystander)
States P1 P2 P3 P4 P5 P6
Do Nothing 0 –99 –99 –99 –99 –99
Pull Switch –99 0 0 0 0 0
FIGURE 1.1 Payoffs in the Trolley Problem (bystander version), measured in terms
of changes from each player's current state. For instance, 0 means you
experience no change from your current state, while –100 is a maximal
loss from your current state. For now, the exact numbers don't matter; we can
call them a percentage of loss. Let's say that getting hit by a train will lead
to a 99 percent chance of losing everything that's important. P1 is the
person on the side-track, and P2–P6 are the five people on the main track.
States P1 P2 P3 P4 P5 P6
Do Nothing 0 –99 –99 –99 –99 –99
Pull Man –99 0 0 0 0 0
FIGURE 1.2 Payoffs in the Trolley Problem (footbridge version). P1 is the man on the
bridge, and P2–P6 are the five people on the main track.
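To make these payoff tables concrete, here is a minimal sketch (my own illustration, not code from the book; the function names and the tie-breaking behavior are assumptions) of how such a matrix might be encoded, comparing a "save the most lives" rule that sums payoffs with the Maximin rule described in the book's summary, which looks only at the worst-off person:

```python
# Illustrative encoding of the bystander payoff table from Figure 1.1.
payoffs = {
    "do_nothing":  [0, -99, -99, -99, -99, -99],   # P1..P6
    "pull_switch": [-99, 0, 0, 0, 0, 0],
}

def save_most_lives(table):
    """Pick the action with the largest total payoff (a simple utilitarian sum)."""
    return max(table, key=lambda action: sum(table[action]))

def maximin(table):
    """Pick the action whose worst-off person fares best (max of the minimum payoffs)."""
    return max(table, key=lambda action: min(table[action]))

print(save_most_lives(payoffs))   # pull_switch  (total -99 beats -495)
print(maximin(payoffs))           # do_nothing: both actions bottom out at -99, so this
                                  # coarse table is a tie and max() simply keeps the first key;
                                  # finer-grained payoffs would be needed to separate them.
```

The point of the sketch is only that the two rules read the same table differently; nothing here settles which rule is right.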
This chapter will give a broad survey of the universal categories and rules that
people use when making moral judgments. We’ll see that moral judgments are a
unique system in the human mind, distinct from social conventions, emotions,
and religious beliefs. I’ll be calling this a functional network to emphasize that our
system for making moral judgments is probably a collection of older psychologi-
cal capacities that have been co-opted over time for a single practical purpose.
Many researchers who are developing ethics algorithms for machines view moral
judgment as a cognitive trait like language, and the obvious way to implement
this trait in a machine is to simulate our moral grammar. I’ll try to show how
our moral psychology carries along with it many useful features, but also many
unfortunate ones that we don’t want to program into robots. The following
chapters will develop a framework for determining exactly which of these fea-
tures are useful or unfortunate.
Moral grammar
Any survey of human universals, like that compiled by anthropologist Donald
Brown (1991), will include the fact that all human beings judge some actions as
wrong and others as permissible. It’s not hard to think of actions that most of us
intuitively categorize as morally wrong: homicide, battery, rape, abuse, discrim-
ination, cheating, and massive deception. These judgments are made quickly,
automatically, and without effort. If I were to ask you why you think that cheat-
ing is wrong, you might be baffled and find it hard to answer. You might even
say: “It just is!” This doesn’t mean that there are no rules for evaluating right and
wrong, but it suggests that these rules are unconscious, like asking someone why
they perceive a table as rectangular or why they accept a sentence as grammatical.
The job of moral psychology is to discover how people understand the categories
of wrong and acceptable, and how they sort actions into these categories.
Skeptics about morality may question whether there is a unique kind of thing
called a moral judgment, suspecting instead that morality is really something
else entirely: perhaps nothing more than social convention, religious rule, or emotional response.
It’s true that social conventions, religion, and emotions are strongly associated
with moral judgments. But, as social scientists say, correlation isn’t causation. In
order to talk about morality, we’ll start by distinguishing it from these closely
related phenomena.
Social conventions like “men have short hair and women wear dresses” are
used to guide people’s behavior, but the rules themselves are arbitrary and vary
widely from culture to culture. A famous (and probably apocryphal) story of
the Persian Emperor Darius describes how the ancient Indians in his court
judged eating their dead parents to be a sign of great respect, while the ancient
Greeks found it to be a sign of great offense. The typical conclusion from sto-
ries like this is that morality is an arbitrary set of rules, and there’s no common
ground we can use to settle disagreements, so we should just be tolerant of all
moral beliefs.
In response to this story, the philosopher James Rachels (2009) points out
that there is a lot more agreement about moral beliefs than social conventions.
In the story of Darius, both the Indians and Greeks agreed on the more general
rule: honor your parents. They just disagreed about which practice is the best
way of honoring them, eating or not eating them. It’s important to keep in
mind how a person’s empirical beliefs about how the world works can dramati-
cally change her moral judgments. For example, torturing an innocent person
may look like a barbaric practice to us, but if the people doing it genuinely
believe that this action is saving the person’s immortal soul from an eternity
of torture, or saving an entire society from the wrath of the gods, then the
judgment begins to look less crazy. Ask yourself how your moral beliefs would
change if you were to genuinely believe that a cow could contain the soul of
your dead parents, or that some group intentionally caused the diseases your
family is experiencing. Barbaric actions often turn out to be based on the same
moral principles that we endorse, but tragically incorrect information about
how the world works.
Even if a distant society approves of slavery and child abuse, hopefully you
still think these actions are morally wrong. Tolerance is a good attitude to have
about social conventions, but it’s a disastrous policy to have about moral beliefs.
Several decades ago, the developmental psychologist Elliot Turiel (1983) and
his colleagues conducted experiments where some groups of children were told
stories about conventional rule violations (“Is it okay to wear pajamas to school if
teacher says so?”) and other groups of children were told stories about moral rule
violations (“Is it okay to hit another student if teacher says so?”). Five-year-old
children tend to accept that social norm violations are acceptable when a teacher
approves of it, but actions like hitting another child are typically judged to be
wrong, even if an authority or other community approves.
This distinction between ethics and authority is something advocated by phi-
losophers from Plato to Kant. It’s true that there are laws against attacking stran-
gers, but it would be ridiculous to say that the reason you don’t punch strangers
in the face is that you’re worried about getting arrested! Similarly, there are reli-
gious commands against murder, lying, and stealing, but it’s strange to say that
Christians or Muslims would suddenly become more violent if not for religious
laws. The rate of violent behavior among atheists is no different than the rate
of violence among religious followers, and there is no difference between athe-
ists and religious people in their responses to scenarios like the trolley problem.
In 2011, U.S. Representative Trent Franks accidentally made this point clear when he
defended the motto of the United States, “In God We Trust,” by arguing that
without a trust in God, the country would collapse into violent chaos:
An atheist state is as brutal as the thesis that it rests upon, and there is no
reason for us to gather in this place [the chamber of the U.S. House of Representatives], we
should just let anarchy prevail, because after all, we are just worm food.
This speech was mocked by The Daily Show host Jon Stewart, who continued to
fill in the logical consequences of the senator’s comment:
I guess what I’m saying here, Mister Speaker, is that this four-word motto
is right now the only thing standing between me and a nihilistic killing
spree of epic proportions. Seriously, I just want to state for the congres-
sional record: I do not know right from wrong.
The audience laughed at this joke because they recognized that basing your
moral beliefs entirely on what someone else tells you is a childish attitude which
leads to absurd conclusions. In a dramatic example, Immanuel Kant (1798) wrote
of the story of Abraham and Isaac:
[In] the myth of the sacrifice that Abraham was going to make by butcher-
ing and burning his only son at God’s command … Abraham should have
replied to this supposedly divine voice: “That I ought not kill my good son
is quite certain. But that you, this apparition, are God – of that I am not
certain, and never can be, not even if this voice rings down to me from the
(visible) heaven.”
Kant was an extremely devoted religious believer, and he’s saying to basically
ignore the commands of God when it comes to ethics. Millions of religious
believers also regularly ignore the teachings of authority figures over issues
they believe are right or wrong. Despite the Catholic Church insisting that
contraception is morally wrong, a 2014 Pew poll found that 79 percent of
Catholics believe it to be permissible. Drastic changes in moral beliefs seem to
have no obvious connection to religious beliefs; between 2007 and 2014, Pew
research shows that all religious and non-religious groups in the United States
changed their views about the permissibility of homosexuality at roughly the
same rate. Even Mormons and Evangelical Christians show a change from
24–26 percent to 36 percent acceptance of homosexuality, despite no clear
difference in the commands of their churches. Religion certainly has an influence
on what sorts of information people are exposed to, but there's no reason
to think this has any greater influence on moral beliefs than other sources of
information. There are similar religious differences in beliefs about evolution
and global warming, but it’s silly to think that religion is the cause of beliefs
about global warming, or that beliefs about global warming are nothing more
than religious attitudes.
What about emotions? If I judge an action to be morally wrong, I will almost
certainly feel upset about it, angry at people who do it, and motivated to avoid
doing that action. You would think it’s crazy if someone believes an action is
morally wrong and feels happy about people doing it. Philosophers like David
Hume (1738) have used this connection to argue that emotions are the ultimate
cause of moral judgments. His argument is that morality always involves motiva-
tion, and the only source of motivation is the emotions, so morality must origi-
nate (at least in part) from human emotions. As Hume summarizes:
[I]t is impossible that the distinction betwixt moral good and evil can be
made by reason; since that distinction has an influence on our actions, of
which reason alone is incapable.
I agree with Hume that the emotions play an important role in moral judgments,
but there’s a big difference between playing a role in morality and being all there is
to morality. An example of this radical view is found in A.J. Ayer’s theory (1936)
that words like wrong are nothing more than expressions of emotions.
Ayer’s speculation about the meaning of words like ought and wrong has turned out
to be incorrect. Our best linguistic theories about the meaning of words like ought
show that they are operators acting like the words all and some. There are patterns
in the way that people use moral terms that are identical to patterns in the way
people use all and some. For example, everyone accepts the inference from
"Everyone left the party early" to "Someone left the party early."
It turns out that if we make the connection: [all = obligatory] and [some = per-
missible], then the same pattern shows up in inferences with moral terms: from
"You are obligated to leave the party early" it follows that "You are permitted
to leave the party early."
Why would it be that patterns in moral terms act like all and some? The typical
answer that linguists like Angelika Kratzer (1977) suggest is that these words
involve the same basic operations, called universal and existential quantifica-
tion. Call these operators P for “permissible” and O for “obligated,” so that
O[ Jonathon is in his office] means “Jonathon is obligated to be in his office.”
There is an entire field called deontic logic devoted entirely to the study of these
operators. We won’t get into the details here, but the point is that any informa-
tion under the scope of these kinds of operators has to have a very specific kind of
structure that involves predicates and connectives. This shows that moral judg-
ments must be more than merely emotional responses, although emotions may
play a very important role in determining how certain actions are classified as
permissible or required (this is similar to an argument initially developed by the
philosophers Gottlob Frege and Peter Geach).
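As a rough illustration of this quantificational reading (my own sketch; treating obligation and permission as quantification over a set of "ideal worlds" is one standard move in deontic logic, and the example facts are invented), O behaves like all and P behaves like some:

```python
# Minimal sketch of the all/some reading of "obligatory" and "permissible".
# A "world" is just a dict of facts; ideal_worlds stands in for the deontically
# acceptable alternatives (an assumed toy model, not the book's formalism).

ideal_worlds = [
    {"jonathon_in_office": True, "door_locked": True},
    {"jonathon_in_office": True, "door_locked": False},
]

def O(prop: str) -> bool:
    """O[prop]: prop holds in ALL ideal worlds (universal quantification)."""
    return all(world[prop] for world in ideal_worlds)

def P(prop: str) -> bool:
    """P[prop]: prop holds in SOME ideal world (existential quantification)."""
    return any(world[prop] for world in ideal_worlds)

print(O("jonathon_in_office"))  # True  -- obligated to be in his office
print(P("door_locked"))         # True  -- permitted to lock the door
print(O("door_locked"))         # False -- not obligated to lock the door
# Just as "all" implies "some", whatever is obligatory here is also permissible,
# provided the set of ideal worlds is non-empty.
```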
Instead of thinking about moral judgments as social conventions, religious
beliefs, or emotional responses, a better analogy is to think of them as a functional
network like human language. In his book, Elements of Moral Cognition (2011),
John Mikhail describes this analogy in detail. Just like our quick and automatic
responses about grammar, judgments about the permissibility of actions are the
product of a set of rules about which speakers are largely unconscious. Mikhail
calls this set of rules moral grammar. The study of moral grammar investigates
the categories and rules used by speakers to move from perceptions of actions to
judgments about permissibility or impermissibility.
One way that the linguistic analogy is helpful is that it shows how we can
appeal to our own intuitions as evidence about the unconscious psychological
processes that produced them. This is one of the many changes that Noam
Chomsky brought to linguistics during the 1950s and 1960s. The analogy also
shows how speakers can be making use of unconscious rules that they themselves
can’t explicitly describe. For example, as competent English speakers, we can
easily transform the sentence John is running into Is John running? but most of us
are incapable of explaining the rule for this: “Move the first aux verb after the
subject to the front; if there is no aux verb, insert a do/does.” This is fascinating:
it’s a rule that we all use but can’t explicitly articulate. Moral rules seem to have
a similar ineffable quality to them; they’re easy to use but hard to explain. As
U.S. Supreme Court Justice Potter Stewart famously remarked about obscenity:
I shall not today attempt further to define the kinds of material I under-
stand to be expressed within that shorthand description [hardcore pornog-
raphy], and perhaps I could never succeed in intelligibly doing so. But I
know it when I see it…
As much as this remark has been mocked over the years, it’s the same quality
that linguistic rules have: I can’t articulate what makes a sentence of my native
language grammatical or ungrammatical, but I know a well-constructed one
when I see it. This isn’t because the rules are part of some magical realm that we
can only detect with extrasensory perception. Instead, it’s because these rules
are structures in our minds and brains; we have conscious access only to their
outputs. With enough careful study, we can hypothesize about and reconstruct
what rules our brains are using to form grammatical sentences and sort actions
into categories of wrong and acceptable.
In human languages, it is truly incredible how a massive amount of variation
can be generated from toggling parameters on a few simple underlying rules.
This insight can be framed as a way of answering the “nature/nurture” question
for a cognitive trait. As Steven Pinker (2002) points out, the boring (and almost
trivially true) answer to this question is “a little bit of both,” but the interesting
answer is describing in detail exactly how innate features of the human mind
enable parts of language to be acquired through experience. According to the
linguistic analogy, some components of our moral judgments, like what objects
are valuable and which effects are harmful, may be acquired through emotional
responses and cultural norms. However, the way that valuable objects and harm-
ful effects are framed within a system of rules may be constrained by only a lim-
ited set of configurations determined by the structure of our moral psychology.
As the cognitive scientist David Marr (1981) argued, there are at least three
distinct levels of explanation for a cognitive trait like language or moral judg-
ment: (1) the way a system is implemented, (2) the categories and rules it uses,
and (3) the goal of the system. For example, a cash register is implemented in
the hardware of the machine, it uses a few basic computational rules, and the
function of the machine is to report prices and exchange money. If morality is a
functional network of the human mind, we want to know how it’s implemented
in the brain, the rules that it’s using to evaluate actions, and what its historical
function is. I’ll have nothing to say about how moral judgments are implemented
in the human brain, but the rest of this chapter will discuss some of the abstract
entities and rules that the moral network uses, and the next chapter will look at
its evolutionary and historical function.
The basic elements of this moral grammar are: AGENT, PATIENT, INTEND, CAUSE, STATE, HARM.
Like any good theory, we want to get the smallest number of elements necessary
to build up all the bigger structures from them. A good theory of chemistry will
have elements like hydrogen and oxygen, then build larger structures like water
molecules. Similarly, the hope of a computational theory of moral grammar is
that all moral beliefs and even more complex concepts like innocence and consent
can be built as molecules from these basic elements.
Let’s start with AGENT and PATIENT. Just like it’s hard to build a sentence
without a noun and a direct object, it’s equally hard to make a moral judgment
without an agent and a patient. Roughly, agents are the ones who perform the
action and patients are the ones who experience its effects. In their book, The
Mind Club (2016), psychologists Daniel Wegner and Kurt Gray describe some
of the features that humans use to detect what objects are agents and patients.
While people reliably identify normal adult humans as both agents and patients,
some objects are only identified as agents (gods and robots), while others are only
identified as patients (cute animals and babies). Wegner and Gray hypothesize
that features like perceived power and control are essential for identifying an
agent, while patients are picked out by movement at a human-like speed, having
human-like facial features, and reacting to pain in a human-like way. A patient
might also be identified because she is similar, familiar, or genetically related.
In a startling example of how genetic relatedness influences our moral judg-
ments, the biologist April Bleske-Rechek and her colleagues (2010) used the trol-
ley problem to modulate the relationship between the agent and people harmed,
varying the victims by sex (M/F), age (2, 20, 45, 70), and relatedness (stran-
ger, cousin, uncle/aunt, grandfather/grandmother, son/daughter, brother/sister,
mother/father). Participants were then asked: "Would you flip the switch in this …"
Actions are more than just: AGENT CAUSE PATIENT. Instead, we care
about a part of the agent’s mind doing the work, and we can label that part of
her mind with the category INTEND. Imagine the difference between seeing
someone stumble and fall on a stranger, compared with the same person inten-
tionally knocking the stranger down. Intention is the difference between murder
and manslaughter, between harms done accidentally and purposely. Hurricanes and
other natural disasters create more destruction than any serial killer, yet we don’t
view them as responsible for their actions because they don’t have intentions. It’s
easy enough to recognize how important intentions are, but any lawyer will tell
you that it’s hard to establish when a person has one! We typically use cues like
behavior, statements, and character traits to establish intent, but these aren’t always
reliable. Intention seems to be more than just a desire or motivation, and somehow
connected to actual plans in a concrete way.
Intentions are not only important for agents, but also for determining which
states are bad for patients. The concept of consent is produced when we consider
the intentions of a patient. Specifically, when a patient intends to be in the state
that the agent causes him to be in, that’s probably what most speakers mean by an
action being consensual. Actions like euthanasia, employment, and sex between
willing partners are often different from murder, slavery, and rape entirely on the
basis of consent. Just like verbal cues, behavior, and character are used to establish
the intentions of the agent, the same kinds of evidence are often used to establish
the consent of the patient. It should be noted that establishing a patient’s consent
is just as tricky as establishing an agent’s intention, and many debates in ethics and
law surround the conditions under which consent is or isn’t present.
Finally, in addition to agents, patients, causes, and intentions, people are sensitive
to an element that can be called HARM. The trolley problem is limited in that the
same type of harm is always involved (physical harm). You might suspect that moral
judgments often involve different types of harm, and you’d be correct. Some actions
result in psychological and emotional damage, or destruction to people’s reputation,
social standing, or relationships. Lying, cheating, and stealing are often judged to be
morally wrong, even if they don’t necessarily involve direct physical harm. To give
a physics analogy, the element HARM is more like a vector than a scalar: it doesn’t
just have a magnitude but also a direction in any number of possible dimensions of
harm. If you thought that it was difficult to give precise conditions for what counts
as an agent and a patient, or when intentions are present, you’ll find it just as difficult
to define and measure dimensions and magnitudes of harm.
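One way to picture the vector idea is sketched below (my own illustration; the dimensions, the numbers, and the choice of a Euclidean norm are all assumptions, not measurements from the book):

```python
import math

# Illustrative only: a harm represented as components along several assumed dimensions.
DIMENSIONS = ("physical", "psychological", "reputational")

def magnitude(harm: dict) -> float:
    """Overall size of a harm vector (Euclidean norm, one possible aggregation choice)."""
    return math.sqrt(sum(harm.get(dim, 0.0) ** 2 for dim in DIMENSIONS))

stabbing = {"physical": 0.9, "psychological": 0.4}
public_humiliation = {"psychological": 0.6, "reputational": 0.7}

print(round(magnitude(stabbing), 2))            # 0.98
print(round(magnitude(public_humiliation), 2))  # 0.92
# Two harms with no dimensions in common can still be compared in magnitude,
# though how (and whether) to aggregate the dimensions is exactly the hard question.
```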
The psychologist Jonathan Haidt (2012) denies that harm is an essential element
in moral thinking, insisting that Western academics have narrowed the definition
of morality to only those judgments involving harm, thereby neglecting other important
features of moral judgments like purity, authority, and sanctity. In my view, what
we're calling HARM is not in conflict with disgust, since the two exist at different
levels of explanation. HARM is an element in moral grammar used to measure
the damage done to a patient; it can be instantiated by evaluations of physical suf-
fering from a wide range of sources, including a projection of suffering from the
evaluator’s own disgust and purity judgments. This is where Hume was correct;
people typically use their own emotional responses to evaluate harm. However, it’s
important that these emotions are also projected onto the agent, which is the dif-
ference between “that’s gross” and “that’s wrong.” Experiments by Kurt Gray and
colleagues (2014) suggest that even moral evaluations based on purity are always
projected onto an implicit victim who suffers some damage. These victims might
be people in alternate imagined realities, like the potential victims of drunk driv-
ers or the potential children that might have existed from continuing a pregnancy.
The patient is viewed as having a part of their identity damaged or destroyed,
even if this entity is entirely a projection of the speaker’s own emotional responses.
Unfortunately, this is another inconsistent and inaccurate way humans apply moral
rules that often leads to disastrous consequences.
Harmful battery
An agent’s intention causes a patient to be in a state by physical contact,
the state is physically damaging to the patient, and the patient doesn’t
intend to be in that state.3
There are three clauses in this definition (separated by and): one involving causa-
tion, the other involving harm, and the third involving consent. For instance, a
criminal stabbing a random innocent person in the leg is an obvious case of battery,
but a surgeon creating an incision in a patient’s leg is not battery (even though it’s
causing the same state), because the patient has consented to the contact and it’s
not perceived as harm. The bystander who switches the train to a side-track causes
the person on the side-track to be in a state of harm by pulling the switch, but this
doesn’t count as harmful battery, because it wasn’t caused by direct physical force.
You can already start to imagine all the possible variations of battery that can be
generated by toggling the settings on these elements. Many of these are also estab-
lished legal entities like “offensive battery,” which toggles settings on the kinds of
harm (physical vs. psychological):
Offensive battery
An agent’s intention causes a patient to be in a state by physical con-
tact, the state is psychologically distressing to the patient, and the patient
doesn’t intend to be in that state.4
We could also toggle the settings on CAUSE between physical contact and
something more indirect, where an agent causes physical harm to the patient
through not performing an action, which is called an omission. The agent per-
forms an omission whenever she could have intervened in nearby counterfactual
situations but doesn’t, like allowing someone to drown when she could easily
save them:
Harmful negligence
An agent’s intention causes a patient to be in a state by not performing
an action, the state is physically damaging to the patient, and the patient
doesn’t intend to be in that state.5
We won’t get into the details about all these variations. I’ll just note that they are
much like the variations in natural languages: minor changes on a shared deep
structure. We also won’t worry too much about the exact structure of the concepts
and rules. I agree with Mikhail that battery is a paradigm case of actions judged to
be wrong, and that other wrong actions like homicide and rape can be generated by
small variations in the actions defined here. It’s also useful to show how this can gen-
erate the idea of an innocent person with respect to battery: a person is innocent of
battery when they haven’t intentionally caused it, even if they might still intend the
patient to be in a bad state. Importantly, just wanting someone to experience harm
might be permissible, but causing them to experience harm is wrong. As fictional
detectives emphasize, there’s nothing illegal about just wanting someone dead.
Under what conditions are actions like battery and homicide judged to be
wrong? We could define a simple rule: “If [harmful battery], then wrong,” but
that’s not enough to get at the complexities of people’s beliefs. Almost everyone
allows for battery in cases of punishment and self-defense. A more sophisticated
rule that includes these cases would be a conditional that restricts the prohibition
on battery to innocent people. Call this the Intentional Harm Rule.
This rule essentially says: don’t intentionally cause harm to innocent people
without their consent. But our definition doesn’t specify anything about guilty
people, and we wanted to explain things like punishment and self-defense.
Thus, we need to construct another rule within our system that accounts for the
permissibility of retribution; call it the Retribution Rule:
Retribution Rule
If an agent has caused battery to you, then it is permissible to cause battery
to that agent.7
The Retribution Rule can be toggled to form many varieties, like self-defense,
revenge, and punishment. Despite the fact that the biblical Jesus claims that
both punishment and revenge are impermissible, most Christians have histori-
cally ignored this teaching in their own moral judgments. This is yet another
example of religious believers ignoring the commands of authorities about
moral judgments.
Let’s assume that these two rules, along with all the associated variations
produced by toggling parameters, adequately explain human moral judgments.
Following the linguistic analogy, we can think of these variations as different
moral languages that are each derived from a set of universal elements.
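As a sketch of how the elements, the toggled battery variants, and these two rules might be strung together in code (my own illustration; the boolean encoding, the field names, and the way innocence and consent are passed around are all assumptions rather than the book's formalism):

```python
from dataclasses import dataclass

@dataclass
class Action:
    # The grammar's elements encoded as toggled parameters (illustrative only).
    agent: str
    patient: str
    intended: bool              # INTEND: the agent meant to bring about the state
    by_contact: bool            # CAUSE toggled between physical contact and omission
    physical_damage: bool       # HARM toggled between physical and psychological
    psychological_damage: bool
    patient_consents: bool      # the patient intends to be in the resulting state

def harmful_battery(a: Action) -> bool:
    return a.intended and a.by_contact and a.physical_damage and not a.patient_consents

def offensive_battery(a: Action) -> bool:
    return a.intended and a.by_contact and a.psychological_damage and not a.patient_consents

def harmful_negligence(a: Action) -> bool:
    return a.intended and not a.by_contact and a.physical_damage and not a.patient_consents

def intentional_harm_rule_forbids(a: Action, patient_is_innocent: bool) -> bool:
    """Roughly: intentionally harming an innocent, non-consenting patient is wrong."""
    return patient_is_innocent and (
        harmful_battery(a) or offensive_battery(a) or harmful_negligence(a)
    )

def retribution_rule_permits(patient_previously_battered_agent: bool) -> bool:
    """Retribution Rule: battery against someone who has battered you may be permissible."""
    return patient_previously_battered_agent

stabbing = Action("criminal", "stranger", intended=True, by_contact=True,
                  physical_damage=True, psychological_damage=False, patient_consents=False)
surgery = Action("surgeon", "patient", intended=True, by_contact=True,
                 physical_damage=True, psychological_damage=False, patient_consents=True)

print(harmful_battery(stabbing), harmful_battery(surgery))   # True False
print(intentional_harm_rule_forbids(stabbing, True))         # True
print(intentional_harm_rule_forbids(stabbing, False))        # False (e.g., punishment or self-defense)
```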
We start by supposing that through the course of one’s life, one will acquire
attachments for various people or even groups of people. These attachments
and feelings can be represented through the vector introduced in the previ-
ous section. As mentioned in the introduction, these values could come from
empathy and emotional responses, imagination and stories, morally charged
analogical deliberation, love, contact, exposure, etc.
Other documents randomly have
different content
“Why?” and the worthy Morgeson laughed sweetly—“I see, my dear
Mr Tempest, you are like most men of genius—you do not
understand business. The reason why we give the first two hundred
and fifty copies away is in order to be able to announce at once in all
the papers that ‘The First Large Edition of the New Novel by
Geoffrey Tempest being exhausted on the day of publication, a
Second is in Rapid Preparation.’ You see we thus hoodwink the
public, who of course are not in our secrets, and are not to know
whether an edition is two hundred or two thousand. The Second
Edition will of course be ready behind the scenes, and will consist of
another two hundred and fifty.”
“Do you call that course of procedure honest?” I asked quietly.
“Honest? My dear sir! Honest?” And his countenance wore a
virtuously injured expression—“Of course it is honest! Look at the
daily papers! Such announcements appear every day—in fact they
are getting rather too common. I freely admit that there are a few
publishers here and there who stick up for exactitude and go to the
trouble of not only giving the number of copies in an Edition, but
also publishing the date of each one as it was issued,—this may be
principle if they like to call it so, but it involves a great deal of
precise calculation and worry! If the public like to be deceived, what
is the use of being exact! Now, to resume,—your second edition will
be sent off ‘on sale or return’ to provincial booksellers, and then we
shall announce—“In consequence of the Enormous Demand for the
new novel by Geoffrey Tempest, the Large Second Edition is out of
print. A Third will be issued in the course of next week.” And so on,
and so on, till we get to the sixth or seventh edition (always
numbering two hundred and fifty each) in three volumes; perhaps
we
99 can by skilful management work it to a tenth. It is only a
2 The author has Mr Knowles’s own written authority for this fact. Back
107
X
her, set her down in her chair again and clapped his hands. She
came to directly, and didn’t know a bit what she’d been doing. Then
twenty-two bell rang again, and the fellow rolled up his eyes like a
clergyman and said, ‘Let us pray!’ and off he went.”
I laughed.
“He seems to have a share of humour at anyrate,”—I said; “I should
not have thought it of him. But do you think these antics of his are
mischievous?”
“Well that scullery girl is very ill to-day,”—replied Morris; “I expect
she’ll have to leave. She has what she calls the ‘jumps’ and none of
us dare tell her how she got them. No sir, believe me or not as you
like, there’s something very queer about that Amiel. And another
thing I want to know is this—what does he do with the other
servants?”
“What does he do with the other servants?” I repeated bewilderedly
—“What on earth do you mean?”
“Well sir, the prince has a chef of his own hasn’t he?” said Morris
enumerating on his fingers—“And two personal attendants besides
Amiel,—quiet fellows enough who help in the waiting. Then he has a
coachman and groom. That makes six servants altogether. Now none
of these except Amiel are ever seen in the hotel kitchens. The chef
sends all the meals in from somewhere, in a heated receptacle—and
the two other fellows are never seen except when waiting at table,
and they don’t live in their own rooms all day, though they may
sleep there,—and nobody knows where the carriage and horses are
put up, or where the coachman and groom lodge. Certain it is that
both they and the chef board out. It seems to me very mysterious.”
I began to feel quite unreasonably irritated.
“Look here, Morris,” I said—“There’s nothing more useless or more
harmful than the habit of inquiring into other people’s affairs. The
prince has a right to live as he likes, and do as he pleases with his
servants—I am sure he pays royally for his privileges. And whether
his
125 cook lives in or out, up in the skies or down in a cellar is no
matter of mine. He has been a great traveller and no doubt has his
peculiarities; and probably his notions concerning food are very
particular and fastidious. But I don’t want to know anything about
his ménage. If you dislike Amiel, it’s easy to avoid him, but for
goodness sake don’t go making mysteries where none exist.”
Morris looked up, then down, and folded one of my coats with
special care. I saw I had effectually checked his flow of confidence.
“Very well, sir,”—he observed, and said no more.
I was rather diverted than otherwise at my servant’s solemn account
of Amiel’s peculiarities as exhibited among his own class,—and when
we were driving to Lord Elton’s that evening I told something of the
story to Lucio. He laughed.
“Amiel’s spirits are often too much for him,”—he said—“He is a
perfect imp of mischief and cannot always control himself.”
“Why, what a wrong estimate I have formed of him!” I said—“I
thought he had a peculiarly grave and somewhat sullen disposition.”
“You know the trite saying—appearances are deceptive?” went on
my companion lightly—“It’s extremely true. The professed humourist
is nearly always a disagreeable and heavy man personally. As for
Amiel, he is like me in the respect of not being at all what he seems.
His only fault is a tendency to break the bounds of discipline, but
otherwise he serves me well, and I do not inquire further. Is Morris
disgusted or alarmed?”
“Neither I think,” I responded laughing—“He merely presents himself
to me as an example of outraged respectability.”
“Ah then, you may be sure that when the scullery-maid was dancing,
he observed her steps with the closest nicety;” said Lucio—“Very
respectable men are always particular of inspection into these
matters! Soothe his ruffled feelings, my dear Geoffrey, and tell him
that Amiel is the very soul of virtue! I have had him in my service for
a long time, and can urge nothing against his character as a man.
He does not pretend to be an angel. His tricks of speech and
behaviour are the result of a too constant repression of his natural
hilarity, but he is really an excellent fellow. He dabbled in hypnotic
science when he was with me in India; I have often warned him of
the danger there is in practising this force on the uninitiated. But—a
scullery-maid!—heavens!—there are so many scullery-maids! One
more or less with the ‘jumps’ will not matter. This is Lord Elton’s.”
The carriage stopped before a handsome house situated a little back
from Park Lane. We were admitted by a man-servant gorgeous in
red plush, white silk hose and powdered wig, who passed us on
majestically to his twin-brother in height and appearance, though
perhaps a trifle more disdainful in bearing, and he in his turn
ushered us upstairs with the air of one who should say “See to what
ignominious degradation a cruel fate reduces so great a man!” In the
drawing-room we found Lord Elton, standing on the hearth-rug with
his back to the fire, and directly opposite him in a low arm chair,
reclined an elegantly attired young lady with very small feet. I
mention the feet, because as I entered they were the most
prominent part of her person, being well stretched out from beneath
the would-be concealment of sundry flounced petticoats towards the
warmth of the fire which the Earl rather inconsiderately screened
from view. There was another lady in the room sitting bolt upright
with hands neatly folded on her lap, and to her we were first of all
introduced when Lord Elton’s own effusive greetings were over.
“Charlotte, allow me,—my friends, Prince Lucio Rimânez—Mr
Geoffrey Tempest; gentlemen, my sister-in-law, Miss Charlotte
Fitzroy.”
We bowed; the lady gave us a dignified bend of the head. She was
an imposing looking spinster, with a curious expression on her
features which was difficult to construe. It was pious and prim, but it
also suggested the idea that she must have seen something
excessively improper once in her life and had never been able to
forget it. The pursed-up mouth, the round pale-coloured eyes and
the chronic air of insulted virtue which seemed to pervade her from
head to foot all helped to deepen this impression. One could not
look at Miss Charlotte long without beginning to wonder irreverently
what it was that had in her long past youth so outraged the cleanly
proprieties of her nature as to leave such indelible traces on her
countenance. But I have since seen many English women look so,
especially among the particularly ‘high bred,’ old and plain-featured
of the “upper ten.” Very different was the saucy and bright
physiognomy of the younger lady to whom we were next presented,
and who, raising herself languidly from her reclining position, smiled
at us with encouraging familiarity as we made our salutations.
“Miss Diana Chesney,”—said the Earl glibly—“You perhaps know her
father, prince,—you must have heard of him at any rate—the famous
Nicodemus Chesney, one of the great railway-kings.”
“Of course I know him”—responded Lucio warmly—“Who does not! I
have met him often. A charming man, gifted with most remarkable
humour and vitality—I remember him perfectly. We saw a good deal
of each other in Washington.”
“Did you though?” said Miss Chesney with a somewhat indifferent
interest,—“He’s a queer sort of man to my thinking; rather a cross
between the ticket-collector and custom-house officer combined, you
know! I never see him but what I feel I must start on a journey
directly—railways seem to be written all over him. I tell him so. I say
‘Pa, if you didn’t carry railway-tracks in your face you’d be better
looking.’ And you found him humorous, did you?”
Laughing at the novel and free way in which this young person
criticised her parent, Lucio protested that he did.
“Well I don’t,”—confessed Miss Chesney—“But that may be because
I’ve heard all his stories over and over again, and I’ve read most of
them in books besides,—so they’re not much account to me. He tells
some of them to the Prince of Wales whenever he can get a chance,
—but he don’t try them off on me any more. He’s a real clever man
too; he’s made his pile quicker than most. And you’re quite right
about his vitality,—my!—his laugh takes you into the middle of next
week!”
Her bright eyes flashed merrily as she took a comprehensive survey
of our amused faces.
“Think I’m irreverent, don’t you?” she went on—“But you know Pa’s
not a ‘stage parent’ all dressed out in lovely white hair and
benedictions,—he’s just an accommodating railway-track, and he
wouldn’t like to be reverenced. Do sit down, won’t you?”—then
turning her pretty head coquettishly towards her host—“Make them
sit down, Lord Elton,—I hate to see men standing. The superior sex,
you know! Besides you’re so tall,” she added, glancing with
unconcealed admiration at Lucio’s handsome face and figure, “that
it’s like peering up an apple-tree at the moon to look at you!”
Lucio laughed heartily, and seated himself near her—I followed his
example; the old Earl still kept his position, legs a-straddle, on the
hearth-rug, and beamed benevolence upon us all. Certainly Diana
Chesney was a captivating creature; one of those surface-clever
American women who distinctly divert men’s minds without in the
least rousing their passions.
“So you’re the famous Mr Tempest?” she said, surveying me critically
—“Why, it’s simply splendid for you isn’t it? I always say it’s no use
having a heap of money unless you’re young,—if you’re old, you only
want it to fill your doctor’s pockets while he tries to mend your poor
tuckered-out constitution. I once knew an old lady who was left a
legacy of a hundred thousand pounds when she was ninety-five.
Poor old dear, she cried over it. She just had sense enough to
understand what a good time she couldn’t have. She lived in bed,
and her only luxury was a halfpenny bun dipped in milk for her tea.
It was all she cared for.”
“A hundred thousand pounds would go a long way in buns!” I said
smiling.
“Wouldn’t it just!” and the fair Diana laughed—“But I guess you’ll
want something a little more substantial for your cash Mr Tempest! A
fortune in the prime of life is worth having. I suppose you’re one of
the richest men about just now, aren’t you?”
She put the question in a perfectly naïve frank manner and seemed
to be unconscious of any undue inquisitiveness in it.
“I may be one of the richest,”—I replied, and as I spoke the thought
flashed suddenly across me how recently I had been one of the
poorest!—“But my friend here, the prince, is far richer than I.”
“Is that so!” and she stared straight at Lucio, who met her gaze with
an indulgent, half satirical smile—“Well now! I guess Pa’s no better
than a sort of pauper after all! Why, you must have the world at
your feet!”
“Pretty much so,”—replied Lucio composedly—“But then, my dear
Miss Chesney, the world is so very easily brought to one’s feet.
Surely you know that?”
And he emphasized the words by an expressive look of his fine eyes.
“I guess you mean compliments,”—she replied unconcernedly—“I
don’t like them as a rule, but I’ll forgive you this once!”
“Do!” said Lucio, with one of his dazzling smiles that caused her to
stop for a moment in her voluble chatter and observe him with
mingled fascination and wonderment.
“And you too are young, like Mr Tempest,”—she resumed presently.
“Pardon me!” interrupted Lucio—“I am many years older.”
“Really!” exclaimed Lord Elton at this juncture—“You don’t look it,
does he Charlotte?”
Miss Fitzroy thus appealed to, raised her elegant tortoise-shell-
framed glasses to her eyes and peered critically at us both.
“I should imagine the prince to be slightly the senior of Mr
Tempest”—she remarked in precise high-bred accents—“But only