It is tempting to rephrase the question “What should I do?” as “What can I do?” “What do I want to do?” or “What do other people want me to do?” If you do this, you are confusing an ought with an is.[1]
You are taking a normative question—in other words, a question that concerns what ought to be the
case or what ought to be done—and trying to replace it with a descriptive question about some fact
of the matter. In the case of the rephrased questions above, the relevant facts are what is within my capacity, what I desire, or what other people desire, respectively.
Changing the normative question from what I or we or you (or someone else) should do to a
descriptive question is, in effect, an effort to leave the ethical analysis up to someone else. The idea
that this is a way of avoiding ethics is, however, an illusion. Most decision-making has a moral
dimension. Part of being a mature, rational individual in a society is being accountable for one’s
decisions and actions. Even if we don’t make the effort to consider whether our actions are right or
wrong, others will.
This primer is a resource for helping you notice and attend to ethical issues and think your way
through them. The intent is to give you tools to help you figure out what you (or others) should do so
that you can weigh these moral considerations against what you can do, what you want to do, and
what others want you to do. Sometimes you will be fortunate enough to discover that the answers to
these questions line up and you are not faced with a problem. All too frequently, however, you will find
that if you really think about it, what you want to do or what others want you to do fails to accord with
what you should do. What you can do provides the limit of the actions that are open to you. However,
carefully considering ethical challenges can often help us revise our own sense of what is possible and
recognize that more may be within our power than we might have initially thought.
This primer will not tell you what to do. That’s up to you. Instead, it offers a variety of ways of thinking
about ethics for you to apply yourself. Again, rational, adult humans are and should be held
accountable for their actions. So, being able to articulate ethically sound reasons for your actions is
important for being able to defend yourself to others who might disagree with your choices and
behaviour.
Of course, context matters. This is one of the reasons why applied ethics subdisciplines abound.
Nonetheless, there are commonalities among these areas as they all engage and take guidance from
normative ethical theory. Normative ethics is the systematic study, development, and rational
defence of basic values, moral concepts, and ethical theories. Ethicists offer theories that explain why
some actions are right and others wrong and why some states of affairs, institutions or, indeed, people
are good and others bad.
For well over two thousand years, philosophers from around the globe have been writing down what
they take to be the right way to live and giving arguments for why we should act in one way or
another. Of course, the practice of ethics is considerably older than the written record and has been an
essential part of all human cultures for thousands of years. What we address here simply skims the
surface of a few of these theories from a handful of cultures. There is
a predominance of theories from the European tradition, which reflects the discourses that have
shaped most applied ethics written in English. This should not be taken to imply that the basic ideas in
these theories are uniquely European nor that they are in some sense superior to their non-European
counterparts.
We are currently in an era of post-colonial correction, and we can expect that many non-European
theories will increasingly inform applied ethics. Moreover, the basic approaches discussed below can
be found throughout ethical theories globally. So, along with some key figures and theories from
European ethics, we will discuss ideas from various so-called “non-Western” traditions.
So, what are the kinds of tools that moral philosophers can offer? First are generic philosophical tools
of careful criticism, including the analysis of important concepts, and argumentation. These are skills
that are crucial to any philosophical work, which students would acquire and practice in any philosophy
course. Second, there are the theories that moral philosophers have developed. Although there are
many different theories, we will organize them into four basic approaches that focus on different
things: consequences; action; character; and relationships. Many ethical theories actually touch on all
of these aspects but emphasize one of them as a central focus or starting point. Some moral
frameworks and concepts don’t neatly fit into any one of these four approaches, and we will discuss
two of these—rights theory and ahimsa—after the rest.
Rather than offering a set of arguments about why one theory is superior to another, we will treat our
four approaches as different lenses through which we can assess the various cases and situations that
attract our attention.[2] Just as looking at a landscape through lenses that are tinted different colours
makes different features stand out, so thinking about ethical challenges through each of the basic
approaches draws attention to certain moral features of these situations. In this way, employing these
lenses can improve our moral perception, helping us notice and analyze ethical issues and envision
more effective ways of addressing them.
Now, one might think that one of these theories is in fact the correct account of morality. Indeed, many
normative ethicists take this view and spend their careers defending one or other moral theory. Even
so, it is still important to understand other theories to be able to sympathetically consider and assess
other people’s approaches to moral problems. Before we get to the ethical lenses in Part II, we will
reflect on the character of moral judgement and present some tools for argumentation and debate.
Notes
1. For a short video on the “is/ought problem,” check out Nigel Warburton’s “The Is/Ought Problem,” BBC, last modified November 18, 2014, https://www.bbc.co.uk/programmes/p02c7css.
2. Susan Sherwin, “Foundations, Frameworks, Lenses: The Role of Theories in Bioethics,” Bioethics 13, no. 3/4
(1999): 198-205.
Although emotions can be important and instructive by alerting us to moral issues, they are sometimes
not well justified on reflection. Indeed, in some instances, once we reflect on our emotions, we may
find that they are ethically quite misleading. Even positive emotions, like love, may lead us to misjudge
a situation, prompting us to defend friends or family members who have, in fact, behaved badly.
Negative emotions can be equally misleading. Most of us have had the experience of being in a fit of
anger and doing something (or at least thinking of doing something) that we later recognize was
morally wrong. The Roman historian Tacitus believed that many people have a tendency to hate those
whom they have injured.[1] Our emotional reactions to our own bad behaviour might distort our
perception of our victims in ways that would make us prone to harm them yet further. This should
trouble anyone who is inclined to let their emotions govern their actions. Indeed, philosophical
traditions that foreground moral emotions tend to emphasize the importance of cultivating virtuous or
appropriate emotional responses (as we will see in Chapter 5).
If our emotions can be fallible guides to moral action, what else might we consider? We might think
about how others will judge our actions or how they would act were they in our place. Again, this can
be instructive in terms of alerting us to moral considerations (as we shall see in section 5.3 and section
6.1). Nonetheless, this is typically insufficient for coming to a justified moral judgement. There are
good reasons for this. There are many biases in our society and many people who behave badly. If we
simply judge as others judge and follow what others do or what they expect us to do, we may end up
making some terrible judgements and engaging in some heinous behaviour.
It can be deeply disturbing to discover that those who hold a respected place in our community or the
people we love have immoral attitudes or have engaged in morally repugnant behaviour. Nonetheless,
if we truly care about doing the right thing, we must be open to making such discoveries. We may
even discover that attitudes or conventions that are widely accepted in our society are nonetheless
morally pernicious.
Of course, many social conventions are perfectly morally acceptable. Some may even be morally
required. After all, conventional norms and practices offer a set of rules for behaviour that help the
members of society understand one another and fruitfully interact with each other. However, in order
to be able to distinguish conventions that are useful and good from those that are bigoted and bad, we
need to go beyond the conventions themselves. This is where normative ethics, philosophical analysis
and argument come in.
Take a moment to consider a norm or a practice that was (or perhaps is) thought to be ethically
acceptable in some culture or society (perhaps even your own) that you believe is morally wrong.
Now try to articulate the reasons why it’s wrong. You have just started doing moral philosophy!
1.2 Reflection
Now, one might wonder how we can discover that we ourselves or members of our community have
been following customs that are morally wrong, if we are located in societies and communities that
follow these customs. This is where moral theory, conceptual analysis, and argumentation come in. We
can use moral theories to assess the norms, conventions, and practices of our own communities. Even
so, it is difficult to understand how things might be different from within our own culture. This is where
outside perspectives are particularly valuable.
As a number of philosophers who study the theory of knowledge have argued, the critical eye of
people with very different beliefs, norms, and values to our own can be extremely useful for assessing
the claims we endorse and the things we do. The idea is that if a claim or practice can withstand
criticism from a wide variety of different perspectives with very different assumptions then it must be
pretty good, or at least it is likely to be morally acceptable. It is rather like using various experiments
to test the same hypothesis. If your hypothesis is confirmed using a wide array of very different
experimental designs, then your scientific investigations have given you good reason for thinking it is
likely right.
Notice that this process does not give us grounds for dogmatically claiming that the matter is
permanently decided in either science or ethics. Moreover, our assessments must be done in good
faith. If we value scientific knowledge we should welcome having multiple rigorous tests of our favored
theories. In the same way, if we want to do the right thing, we should be open to criticism from a wide
variety of different people whose views are very different from our own. Of course, others may or may
not be right in their criticisms. Either way, being able to understand and assess them will give us
insight into the relevant ethical issues and better justification for our own ethical decisions.
Unfortunately, we often don’t have access to a variety of people from many different backgrounds to
give us feedback on our ideas and activities. Even if we do, these folks may have better things to do
than help us with our moral dilemmas. Fortunately, we do have access to published work by thinkers
from around the globe and we can draw on this and our own imaginations to guess what those who
disagree with us might say. This kind of dialogic reasoning is characteristic of philosophical work (as
we will see in Chapter 2). If you want to do the right thing then sincerely considering arguments both
for and against the various possible actions that are open to you is one of the best ways of ensuring
that you do.
Can you think of a time when someone with a very different perspective said or did something that prompted you to reconsider one of your own cherished ethical or political commitments?
Now, it might reasonably be asked whether such a process of rational reflection, judgement, and
action will always provide the right answer. Philosophers have disagreed on this point. However, the
very fact of their disagreement suggests that, for practical purposes, all philosophers are going to have
to admit that seemingly rational people do in fact disagree about moral issues and sometimes these
disagreements are intractable.
1.3 Disagreement
It is worth articulating the different ways in which philosophers disagree, as this will help us better
analyze and assess competing theories. Sometimes philosophers disagree about the facts. For
instance, two philosophers might share the same basic normative views but disagree about relevant
features of the world. Suppose two philosophers agree that what matters morally is to make people as
happy as possible. However, one believes that, psychologically speaking, what actually makes people
happy is ensuring their safety, while the other believes that happiness depends on maximizing
people’s freedom. Both agree that happiness is a particular emotional state, but they disagree about
the facts regarding what causes it. Notice that if they both really care about doing the right thing, they
are probably going to want to look at some empirical work here. For example, they might examine
research in social psychology to see what really does make people happy.
Another possibility is that the philosophers disagree about what happiness means or, alternatively,
what type of happiness is morally relevant. One philosopher might think that true happiness is an
emotional state that is experienced moment to moment while the other might think that true
happiness depends on achievement and overcoming various struggles and obstacles over a lifetime.
These philosophers are effectively disagreeing about what a certain concept means. Scientific
investigations are unlikely to be helpful. In order for science to discover what causes happiness, first it
must be determined what we’re talking about when we refer to happiness. This brings us back into the
realm of philosophy.
Notice that this question about what a moral concept means is intimately related to who counts. Here,
again, our philosophers might disagree. After all, many nonhuman animals appear to experience
emotional states like happiness, in which case the first philosopher should, presumably, include these
animals in their moral decision-making. The second philosopher might not agree. They might argue
that other animals can’t formulate the kinds of life projects that are required for happiness, and claim
that only humans (or perhaps most humans and a handful of other species) count.[2] While the sciences
might be invaluable for identifying which animals (and humans) have the capacity to be happy, they
can only do this work after philosophers have defined it.
Finally, we might simply accept different moral theories and values or rank them differently in
importance. One philosopher might think that maximizing happiness is the single most important
moral goal while another thinks it is irrelevant because freedom is the only thing that matters morally,
whether it makes people happy or not. Here again, there is philosophical work to be done.
Notice that if we agree about the facts, the meaning of moral concepts, who counts, and the
applicable moral theory or values, we should agree about the right course of action. If we are
reasoning carefully and disagree about the right course of action it is almost certainly because we
disagree about the relevant facts, the meaning of moral concepts, who counts, or the relevant moral
theories or values (or their relative importance).
Importantly, whatever we decide to do, we are morally responsible for that decision and its outcome—
good or bad. We should expect to be held accountable for our actions. Happily, if we have carefully
considered our options, listened to and learned from those who disagree, and looked at the situation
through each ethical lens and from all relevant perspectives, we can expect to have a robust and
convincing justification for our actions.
In applied contexts, there is the possibility that even if we disagree about the facts, the interpretation
of moral concepts, who counts, and the correct normative theories, we may nonetheless agree about
what the right action is in a given situation. This gives us another reason for not just choosing one lens
or theory over the others but instead taking a more pluralist approach. If we can show that the same
action is required by a broad set of very different moral views, then this becomes very powerful
evidence that the action is morally required. So, even if you are inclined to think that one of the
approaches discussed below is right to the exclusion of the others, you may be able to provide far
more compelling arguments if you notice when they agree.
An interactive H5P element has been excluded from this version of the text. You can view it online here: https://caul-cbua.pressbooks.pub/aep/?p=391#h5p-8
Notes
1. Tacitus, The Germany and the Agricola of Tacitus (Project Gutenberg, 2013), Agricola para. 42,
https://www.gutenberg.org/files/7524/7524-h/7524-h.htm.
2. Notice that restrictive views about who counts morally may lead us to exclude some nonfetal humans too,
such as the very young and some of the very old, so such restrictive approaches to moral status may turn
out to have unacceptable implications.
At this point, we’re inclined to direct you to a classic skit by the British comedy troupe, Monty Python.
One or more interactive elements has been excluded from this version of the text. You can view them online here: https://caul-cbua.pressbooks.pub/aep/?p=395#oembed-1
In their “Argument Sketch,”[1] they discuss and exemplify both what philosophical argument is and what
it isn’t. Part of what they play with is the fact that we use the word “argument” in a number of different
ways. We can see three different senses of the term in the skit, only two of which are philosophical.
The first sense is when people who disagree about something (or think they disagree) yell at each
other. This is not the philosophical sense of argument. It is correlated with it, however, as sometimes
people who are engaged in such yelling matches at least started out with each taking up a contrary
position and trying to convince the other.
This second sense of argument is basically a synonym for debate. Two or more parties—interlocutors—
take up contrary positions on a point and try to convince the other(s). This is “an intellectual process,”
as one of the characters points out, and a practice that is crucial to philosophy. It is not mere
contradiction because reasons are given in an effort to change the other’s mind.
Even when there is no other person around, philosophers will often think of and defend their own
positions with a type of internal debate. They themselves take up a contrary position to their own view,
make as good a case as possible for it, only to defeat it later. This is one of the reasons that you need
to pay careful attention when reading philosophy. It is all too easy to mistake a passage where an
author is explaining an objection to their position as an account of their own view (in philosophy, we
call this process dialogic reasoning). If you are not used to reading philosophy this can seem bizarre.
Why would someone argue against their own view just to show that the argument they have given
does not work? The idea is, if you can correctly articulate your opponent’s reasons for disagreeing with
you and then show that either there is a flaw in this reasoning or that it is insufficient to dislodge your
claim, then you effectively undermine their position and support your own. The point of debate is to
convince other people of your own view or discover errors in your thinking and revise it. So,
considering objections—thinking about why others might disagree with you and what you can say in
response—is crucial for philosophical argumentation.
This brings us to our third sense of argument. These are the parts of the argument in the debate
sense. Here, Monty Python offers a definition that you might find in any introductory logic book: “An
argument is a connected series of statements to establish a definite proposition.” Often philosophers
will call the “connected series of statements,” “premises,” though this is really just a fancy word for
reasons. The “definite proposition”, or “conclusion”, is established by the premises. (This use of the
term “conclusion” can sometimes be a bit confusing as the same word is also used to refer to the final
section of an essay.) Philosophers often use this language of premises and conclusion, but it is
important not to let these technical terms intimidate you. A conclusion is just a controversial statement
that you are trying to convince others to believe, and the premises are the reasons that you give for
holding it. Sometimes it can be tricky to determine what the conclusion is, but often authors will use
verbal signs, introducing their conclusion with “thus,” “therefore,” “hence,” or a phrase like “it follows that.” (You can find a list of these kinds of verbal signs in Appendix A, Tips for Reading Philosophy
Actively.) It is important to note that philosophers will often have multiple conclusions and arguments
in their paper, though typically these all serve to defend one central conclusion.
Another useful point we can find in Monty Python’s “Argument Sketch” is the distinction between an
argument and a good argument. Of course, in the sketch, when one of the characters says, “I came
here for a good argument,” he means something like he was expecting a debate that was interesting
and fun. Because we are more concerned with the third type of argument, we are going to think about
good arguments as ones that are successful. That is, a good argument is one that would convince any
rational person of the truth (or reasonableness) of the conclusion. The premises of a good argument
really do establish the conclusion (or at least show it to be more reasonable than alternatives).
Following Trudy Govier, we can assess an argument by asking whether its premises are acceptable (A), whether they are relevant to the conclusion (R), and whether they provide good grounds (G) for accepting it.[2] The first condition—A, the truth or acceptability of the premises—is pretty easy to understand. If the
reasons that someone gives for believing a particular conclusion are false (or otherwise unacceptable),
then you don’t have any reason for accepting that conclusion. Ideally, we would be certain that each
premise is true, but certain truth is a difficult standard to maintain. After all, even very well verified
and widely accepted claims in the sciences—for instance, that
cigarette smoking causes cancer—might just be false. This is not a flaw of science; it is a side effect of
the empirical and statistical methods that are characteristic of scientific research. It is possible, albeit
extraordinarily unlikely, that every study of the issue had some unrecognized fatal flaw and that the
well-evidenced correlation between cigarette smoking and cancer is the result of some other factor or factors that are correlated with cigarette smoking and that actually cause cancer. Nonetheless, it is
reasonable to accept the claim that cigarette smoking causes cancer even if we don’t have absolute
certainty. Indeed, if we are not willing to accept claims like this, we will find it difficult to make any
ethical decisions at all (or, indeed, any other kind of decision).
At the same time, we do want to avoid uncritically accepting everything that someone says to defend
their position. Thus, it is important to reflect on why each premise is acceptable and to sincerely
question whether, in fact, more information is needed before a rational evaluation of a given premise
can be made.
The second condition—R, the relevance of the premises—is a bit trickier. It may seem obvious that for
a premise to establish a conclusion it must be relevant, but, in fact, people quite often will use
irrelevant facts to try to convince others to think or do something. There are many different ways of
distracting people from carefully thinking through the matter at hand, and irrelevant premises tend to do this. One of our favorite examples is a false equivalency that is used in an antacid commercial from the 1990s.[3] In this commercial, a man first dips a rose into a glass of hydrochloric acid, visibly
damaging the rose. He then dips a second rose into the same acid, but only after first coating it in the
antacid product being advertised. The second rose emerges seemingly unharmed.
One or more interactive elements has been excluded from this version of the text. You can view them online here: https://caul-cbua.pressbooks.pub/aep/?p=395#oembed-2
We can reconstruct the argument offered by the commercial as something like this:
Premise 1. If we dip a rose in acid, the acid will eat away the rose.
Premise 2. But if we coat the rose in this particular brand of antacid before dipping it in acid, the acid will not eat away the rose.
Conclusion. Therefore, if you have acid indigestion you should “coat” your stomach with this particular brand of antacid.
Of course, a rose is nothing like the human stomach. On the face of it, the fact that the rose is
protected by a particular brand of antacid is just not relevant to whether it will help with acid
indigestion. Minimally, we need some additional reason to think that it is acceptable to “think of this
rose as your stomach,” as the commercial suggests. That means that for this argument to work you
would need to add some reason (or reasons) for thinking that roses and human stomachs are
relevantly similar. As it stands, the premises aren’t relevant to the conclusion.
The third condition—G, whether the premises provide good grounds for accepting the conclusion—is
the most general as it includes the other two conditions. After all, premises that aren’t true (or at least
reasonable) and premises that aren’t relevant cannot provide good grounds for accepting a conclusion.
Indeed, you may think that if all the premises are true (or at least acceptable) and relevant then they
must provide good grounds for accepting the conclusion. This, however, is not the case.
Consider a friend who urges you to try taking a herbal remedy the next time you get a cold. The
reason they give is that
they have started taking it when they get a cold and it works for them. It may well be true that it works
for them and it’s certainly relevant to the broader question of whether one should try the remedy
oneself, but is it good enough grounds for doing so?
You might ask your friend how they came across this remedy. In effect, what you would be doing here
is seeing if there are better reasons for taking the remedy. Suppose they say some dude at the
farmer’s market was selling it and swore by it as the best cold remedy he had ever tried. Do you have
better grounds for thinking it will work? On one hand, you now know that there are at least two people
who say it works, but on the other hand, you know that one of them has a vested interest as he is
selling it. Suppose, instead, that your friend cites a meta-analysis of 20 randomized control trials
showing the efficacy of the remedy for various cold viruses and across various population groups. Now,
clearly, that’s much better grounds for thinking that the remedy will work for you than simply the
testimony of either your friend or the dude at the farmer’s market.
When it comes to some applied ethics contexts, we will find that what constitutes good enough
grounds depends on the seriousness of a situation and the risks involved should we make a bad
choice. With the question of whether you should take the herbal remedy at your friend’s urging, the
stakes are pretty low. After all, they’re still alive, so you can infer that it’s likely not poisonous. The
worst thing that is likely to happen is that it just won’t make any difference to your cold symptoms,
and you’ll be out a few dollars. But suppose instead that you are a health officer in charge of
coordinating a response to a global pandemic in your local area and the president of the United States
claims that a particular drug (in which they have a financial interest) has worked for them and can
significantly reduce the mortality of those infected with the illness. Does this constitute good grounds
for spending a considerable portion of your region’s budget on this remedy? Here the stakes are
higher. The illness is considerably more dangerous; the decision affects many more people than just
you; you are in a position of public trust; it’s your job to make these kinds of decisions well; millions of
dollars will be diverted from other priorities and treatments for the pandemic should you buy the drug;
and so on. When the stakes are high it is reasonable to expect people to have very good grounds for
their conclusions and their decisions.
There are, however, a couple of things that we can do to make our debates more productive. First,
when criticizing someone else’s position, we should try to find all the points of agreement. This process
will help to narrow down exactly where the disagreement lies and focus the discussion there. Many
debates make little progress because people talk past each other; conflicts cannot be resolved
because the interlocutors are not arguing about the same thing! (Debates about abortion frequently
exemplify this problem as one side focuses on the rights of the pregnant person while the other focuses on the moral status of the fetus.[4])
Second, instead of trying to win, we can engage in argument repair.[5] This is where you help your
interlocutor make the best case possible for their position. Argument repair can be achieved by making
assumptions explicit, clarifying
ambiguous terms, adding missing premises, or offering subarguments in defence of dubious premises.
It is important not to misrepresent the argument when we are trying to repair it, which is easy to do if
we disagree.[6] Moreover, for argument repair to be successful, our interlocutor must be open to
revising their position and we should allow them an opportunity to do so. Amendments are only
justifiable if they make the argument stronger. Added premises must be relevant (it’s remarkably easy
to get carried away and add irrelevant premises) and provide good grounds for accepting the
conclusion, and changes should be acceptable to all parties in the debate.
In our everyday conversations, we typically don’t state all of the premises needed to give a complete
defence of our arguments because we share common background knowledge and assumptions with
our interlocutors. However, in more fraught contexts (such as in ethical disputes), there are often
unstated premises that aren’t shared by all parties, or the argument hinges on an important term that
each defines in a subtly different way. (We touched on this in the discussion of disagreement in section
1.3.) In these cases, engaging in argument repair can make the debate more productive for everyone
by redirecting debate away from an adversarial process to a more collaborative one that is aimed at
mutual understanding and a resolution to the dispute that everyone can accept.
Engaging in respectful, good faith dialogue can alert us to features of the moral landscape we may
have overlooked or show us how our own reasoning is lacking. We can then use these insights to
revise our own views and make the arguments defending them stronger.
A religious practitioner might object that they believe that their religion does in fact offer the best
guidance for right action, which is why other people should follow their ethical prescriptions. Of course,
this is possible. After all, there are many different religious traditions and considerable diversity within
each of them and at least some, if not most, of them will have important insights into moral life. The
problem is that there is no obvious way to determine which religion is the right one and thus which
specific ethical rules one should follow.
Mozi,[7] a Chinese philosopher who lived over 2000 years ago (ca. 480–392? BCE), made a similar
point. He argued that it is important that people not simply adopt conventional views and practices—
the kinds of practices that people often unthinkingly follow because they were taught them as children
—as these practices might not be morally right. Moreover, because people come from different
cultures with different practices, simply following these conventions, particularly in contexts of ethical
conflict, inevitably leads to social discord and, in some cases, war. Mozi recognized that accepting a
kind of cultural relativism where right and wrong are simply determined by cultural convention isn’t a viable option when people from many different cultures have to live together. Instead, Mozi argued for
objective moral standards that everyone should follow.[8]
While Mozi was not an advocate for the kind of general freedom that characterizes contemporary
democratic societies, his insights about needing shared ethical standards are still pertinent. This is why
applied ethics typically deals with public reason. The idea of public reason is that the ethical rules
must be acceptable or at least justifiable to everyone who is expected to live by them. This means that
reasons given in applied ethics contexts should rest on ideas and theories that are not parochial. As we
will see when we look at the ethical lenses below, values like rationality, happiness, and freedom have
the sort of broad appeal that is characteristic of public reason.
Though not strictly necessary, there is a certain sense of fairness implicit in the idea of public reason.
All things being equal, we are all expected to follow the same rules. If there is to be differential
treatment, there must be a good reason for it. Indeed, this is really a point about rationality as well as
fairness. Like should be treated alike. In ethical contexts, this ideal is called formal justice. It is a part
of a broader rational norm of consistency.
To summarize, ethics requires us to do more than simply follow our knee-jerk reactions, our emotional
responses, or conventional norms when deciding what to do. It is not that they are irrelevant. They can
alert us to moral issues and important aspects of a tricky moral dilemma. However, they can also
mislead. Moral reasoning requires not only that we assess the moral issues with sensitivity to competing analyses but also that we have good reasons for what we ultimately decide. We need to commit
to shared standards of rational argumentation and constructive debate if we are to defend our
judgements and hold each other accountable for our actions. The ethical lenses discussed below help
to provide the normative content of these reasons.
An interactive H5P element has been excluded from this version of the text. You can view it online here: https://caul-cbua.pressbooks.pub/aep/?p=395#h5p-11
Notes
1. Monty Python, “Argument Clinic—Monty Python—The Secret Policeman’s Balls,” YouTube, January 21,
2009, video,
https://www.youtube.com/watch?v=DkQhK8O9Jik.
2. Trudy Govier, A Practical Study of Argument, Enhanced Edition (Boston: Wadsworth, 2014), 87-103.
3. Retrobox, “Pepto Bismol Rose (1992),” YouTube, November 17, 2012, video,
https://www.youtube.com/watch?v=VGRtg6W6Kug.
4. Shannon Dea has an interesting chapter that takes an argument repair approach to abortion debates by
suggesting harm reduction as a common value shared by pro-life and pro-choice advocates and then
seeing what follows ("A Harm Reduction Approach to Abortion," in Without Apology: Writings on Abortion in
Canada, ed. Shannon Stettner [Edmonton: Athabasca University Press, 2016], 317-32.
https://uwspace.uwaterloo.ca/bitstream/handle/10012/11165/Stettner_2016-Without_Apology.pdf?sequence=1&isAllowed=y#page=327).
5. See Catherine Hundleby for more discussion on argument repair: https://chundleby.com/2015/01/16/what-is-argument-repair.
6. This is also fallacious (illogical) reasoning known as attacking a straw figure. If we misrepresent someone's
account so that it is easier to refute, we are not working productively or collaboratively to repair the
argument. See the School of Thought's "thou shalt not commit logical fallacies" for more logical fallacies to
avoid in your reasoning and writing (https://yourlogicalfallacyis.com).
7. JeeLoo Liu, An Introduction to Chinese Philosophy: From Ancient Philosophy to Chinese Buddhism (Oxford:
Blackwell, 2006), 108.
8. Chris Fraser, "Mohism," in The Stanford Encyclopedia of Philosophy (Fall 2020 Edition), ed. Edward N.
Zalta, §3, last modified
PART II
ETHICAL LENSES
As mentioned above, there are many different moral theories. As you confront particular moral
problems or study applied ethics subdisciplines you will find that digging deeper into these theories is
a crucial part of developing your applied ethics toolkit. Nonetheless, at the introductory level we can
identify four fundamentally different approaches to moral reasoning that cover the essential ideas of
many of these theories:
1. Focus on consequences;
2. Focus on action (and duties);
3. Focus on character (and virtues);
4. Focus on relations.
These are ethical orientations that are woven throughout various global ethical theories and traditions.
As noted above, we are going to think of them as lenses that can be brought to the ethical question,
what should I (or we) do? A focus on consequences prompts one to evaluate the outcomes of our
possible actions, directing us to consider who will be affected in positive or negative ways. A focus on
action (and duty) prompts one to think about the actions themselves, what motivates them, and what
makes a particular action right or wrong, optional or required. A focus on character (and virtues)
presents us with the challenge of figuring out what kind of person we want to be, what constitutes a
good life, and the virtues and activities that are characteristic of good people. A focus on relations
affirms the importance of relationships of various different kinds and looks at how they inform and
constrain what one can and should do. In the next four chapters, we will look at each of these
approaches in more detail.
Even as the four lenses offer a comprehensive set of approaches to thinking through ethical problems
and issues, some moral concepts defy neat inclusion under one or another lens. We will discuss two
important and influential examples—ahimsa and rights—in Part III (Chapter 8 and Chapter 9,
respectively). As you read about the different lenses (and, indeed, the concepts of ahimsa and rights)
you will notice that some of the theories offer different views about who or what should be considered
when we make ethical decisions. This is captured by the idea of moral status (also sometimes called
moral standing or moral considerability). Some theorists treat moral status as a matter of degree,
maintaining that some beings have full moral status and their interests should count more in our
ethical decision-making, while others still count but to a lesser degree. Other theorists treat moral
status as an all-or-nothing kind of issue. What grounds moral status (as we have already seen in
section 1.3) is contentious and we will return to it below as we survey the lenses and develop a sense
of the ways in which questions about moral status arise.