Moral Selves, Evil Selves (2008)
Steven Hitlin
Contents
Acknowledgments ix
Introduction 3
1 Building a Social Psychology of Conscience 11
2 Moving Parts 35
3 Evolution, Society, and Conscience: Social Influences on Morality 53
4 Processes of Conscience: How the Moral Mind Works 75
5 How Situations Subvert Conscience 93
6 Us and Them: Shifting Moral Provinces 113
7 Conscience in Individual Functioning: Self-Deception and Moral Self-Biases 129
8 Conscience and Moral Horizons 147
9 The Moral Ambiguity of Personhood 167
10 The Possibility of Morality 185
Notes 203
Bibliography 227
Index 265
Chapter 3
Evolution, Society, and Conscience: Social Influences on Morality
behaviors (such as desire for food and sex), and aims to achieve homeostasis.
This system evolved early in human history and is more primitive than the
problem-solving and planning aspects of our frontal lobes.
The frontal lobes are the “executive centers” of the brain and decide
how to respond to the signals sent by the emotional brain. The prefrontal
cortex is the central-most aspect of these lobes, connected with just about
every part of the brain, including the emotional centers. It guides inten-
tional acts, planning, and other internal thoughts and behaviors. The “self”
is thought to reside within the prefrontal cortex. The rest of the frontal
lobes are responsible for motor control and movement. The final relevant
parts of the brain are mirror neurons, a network that allows us to mirror
behavior based on others’ actions. This network observes social interaction,
develops patterned responses, and allows us to feel what we perceive others
are feeling:30
[M]irror neurons “learn” from observation of the patterns of others how to
behave morally in social situations. When a challenge occurs to moral deci-
sion making, the amygdala revs up inducing a range of emotions from fear and
anger to disgust. The inhibitory mechanism—the hippocampus and hypo-
thalamus as well as the prefrontal cortex—steps in to ameliorate the response.
This allows for clear-headed deliberation and a reasoned moral decision ...
Sometimes the amygdalar response, if very intense, may overpower the
inhibitory network. When this occurs the decision maker is no longer able
to be objective, which may lead to an immoral decision and behavior.31
however, it is likely that some genetic selection for bonding in groups was
adaptive.45
There is neurological evidence suggesting that this hard-wired concern
with other humans activates different parts of the brain depending on the
nature of the moral dilemma. Emotional conflict is, itself, often indicative
of the brain encountering a moral issue.46 However, moral issues are pro-
cessed differentially if they engage “personal” judgments versus “impersonal”
ones.47 When reasoning about people we know, or we are directly implicated
in harming someone, emotional aspects of the brain light up. When we are
reasoning about an abstract, impersonal ethical issue, neuroimaging shows
only the frontal lobes being activated. The fact that our faster-processing
systems swing into action for dilemmas that implicate a personal issue sug-
gests that we have evolved to be quickly concerned about processing such
information. Actions that affect a loved one, for example, operate first
through our first-order processing. At the basic level, concern for those we
care about quickly affects our perception and biological responses.
Consider a difficult moral dilemma: Should you kill a crying baby if you
were in a group hiding from the Nazis? The baby’s cries might give away
your location and condemn the group to be found. Simple moral problems
rely on tried-and-tested solutions, but difficult moral issues can activate con-
flicting emotional versus cognitive regions.48 In this example, our emotional,
personal brain kicks in with disgust at the prospect of killing a baby, while
our second-order, conscious mind reasons through the costs and benefits.
Thus, we do not have one moral center in the brain; we have competing
systems reacting to moral stimuli.
To this point, we have summarized the biological substrates of the mechanisms of the moral self. In the next chapter, we will go
into more detail about how the moral mind operates during social interac-
tion from a social, not a neurological, level of analysis. Suffice it to say, by
building a model of conscience on the dual-processing system, we are safely
within the bounds of biological and neurological research. Ultimately, the
multiplicity of brain systems that are implicated in any moral judgment or
decision suggests that simple models of moral reasoning do not reflect real
people in real situations. And much of what we think we will do—in moral
domains as in others—tends to get subverted by situational pressures such
that our behaviors diverge from our ideal.
Humans have evolved with the capacity for both making instantaneous
moral judgments and reflecting on them, hot and cold processes that com-
prise the self’s moral dimension. This is not the self’s only dimension, but it
is experienced as core to one’s sense of oneself as a person of worth within
various groups and society as a whole. At the individual level, we are pre-
disposed to internalize boundaries between moral and immoral, between
acceptable actions and Bright Lines of forbidden or undesired conduct.
The fact that we are at root social beings has influenced the way evolu-
tion has shaped a moral apparatus that exists at multiple levels in the brain.
Not only has sociality influenced how we are wired today, but our capacity for language and interaction is at the root of how individuals, not just the species as a whole, develop conscience. As I present a relatively brief overview
of the evidence for this position, recall that much of the published research
on morality involves conventional notions of altruism and aggression, mean-
ing that what we know about moral development occurs within a range of
actions and orientations that is narrower than the vision of conscience I am
setting forth in this book. Eventually, the empirical study of conscience
should encompass a broader range of moral concerns. In addition, much
of the developmental psychological research is done with Western samples.
While there is evidence that children develop moral capacities as part of
being human, the processes outlined here have been largely studied within
Western cultures.
Applying the metaphors of this book to biological analyses of the brain’s
development, we can hypothesize the order of moral development moving
from early childhood Bright Lines to the later development of Bright Lights.
Young children do not possess a capacity for conceptualizing themselves
far into the future, so their goals tend to be more immediate. Children’s
social learning occurs through actual and envisioned rewards and benefits
and grows more and more abstract as the child’s linguistic capacity increases.
While there may be a few biologically hard-wired tripwires located in our
automatic processing system (fear of snakes, perhaps), more advanced moral
issues of sharing, fairness, and delaying gratification are only recognizable
through language and the use of symbols and within an extended time
horizon. These symbols are perceived at both first- and second-order levels,
and as they age, children develop their ability to reason about moral con-
cerns. I will talk about emotions more in the next chapter, but there is evi-
dence for a subset of universally developed emotions within our species.
Children’s development involves the maturing of this capacity and a learn-
ing of their culture’s content that will trigger the appropriate moral emo-
tion at the appropriate time. What we mean by socialization is precisely
how children (or newcomers to a culture) learn the culturally appropriate
triggers for universal feelings such as fear, disgust, anger, and happiness.
parents hold are more influential on their beliefs than parents’ actual values.66
Perception is reality.
Recall that even though the content of these rules can change across
cultures, what they share at the first stage is the sense that the Bright Line is
a universal prohibition, not simply a social convention.67 Conventions are
experienced as particular to one’s culture, such as what side of the road to
drive on. Bright Lines, on the other hand, wall off actions that are seen as
morally deficient regardless of who performs them. The interaction be-
tween culture and moral development is complex,68 but appears to follow
the general two-step order of first internalizing external moral rules and then
developing a personal motivation to adhere to them in the form of person-
ally important values. Children from America and India are able to distin-
guish between cultural definitions of right and wrong by age five, suggesting
this is a universal developmental timetable.69 The motivation to follow these
definitions may be culturally shaped. Chinese children, for example, dem-
onstrate greater consistency between moral knowledge (step 1) and moral
motivation (step 2) than Icelandic children due to the greater societal con-
cern with altruism that motivates the design of school curriculums in
China.70
The most influential theory of moral development in the last forty years
was developed by Lawrence Kohlberg,71 who was famously influenced by
Piaget’s work on the social nature of children’s development72 and less
famously influenced by Dewey’s ideas on the development of the self.73
Kohlberg’s work posits six (later five) stages of possible moral development, from childhood into adulthood. At the earliest levels, children act morally,
defined in the prosocial sense described in the last chapter, because they are
instructed to by people with authority. Only later on do some adults appeal
to higher ethical systems to undergird their moral beliefs. At first, for exam-
ple, we do not steal our sibling’s toys because parents and older family mem-
bers tell us not to and threaten us with punishment. Later on, we refrain
because we want to be “good” and feel rewarded if we live up to family and
community standards. At the higher levels, Kohlberg held, we obey the
social order through a deep understanding of abstract principles (individual
rights) and feel that violating those principles is a moral violation. Moral
breaches are, for those at these higher levels, more fundamental than simply
acting well to forestall others’ disapproval. Very few adults (only 1–2 percent
of people worldwide) reach the highest stages on Kohlberg’s scales.
At first blush, Kohlberg’s theory seems to add detail to the two-step
process discussed above. However, it has been a lightning rod for criticism.
For one thing, the word problems that respondents are asked to answer
in order to gauge their supposed level of moral development reward people
who reason like moral philosophers, something most research subjects would
have trouble with. Perhaps Kohlberg has developed a theory of moral jus-
tification based on the ability to articulate moral feelings, without really
delving into the feelings and intuitions themselves.74 Children demonstrate
moral reasoning that is, in practice, much more advanced than the Kohlberg stages would predict, and can vary their reasoning based on context or priorities.75 Kids are not less logical than adults; they are simply more susceptible
to letting emotional stimuli override their logical centers and are known to
demonstrate less impulse control.76 Kohlberg’s rational approach privileges
justice over other virtues as if it is the only principle that organizes social
life.77 Morality involves more than just issues of harm and justice.78
More problematically, the Kohlberg scheme has not received extensive
empirical support,79 especially when examined across cultures. For example,
Shweder and colleagues found elements of Kohlberg’s rare and elite reason-
ing (postconventional) in children and adults in both Brahmin Indian and
“untouchable” Indian populations. Thus, they argued, such reasoning was
not nearly as limited as Kohlberg suggested.80 They found a variety of prin-
ciples underlying perceived moral obligations, not simply one scheme based
on Western notions of rationality and justice. Shweder posits multiple
“natural laws,” arguing that morality extends beyond conventional discus-
sions of justice, harm, and rights to include issues of duty, hierarchy, and
interdependency.81 Children are able to differentiate between moral con-
cerns and social conventions as early as forty-two months, much earlier
than suggested in Kohlberg’s scheme.82 Perhaps most problematically,
individuals who engage in the most prosocial action do not score highly
on Kohlberg’s scheme83 or are no different than nonprosocial individuals
in the stage of moral development they have supposedly reached.84 People
who score highly on the Kohlberg scale do not act any more ethically than
those who score more poorly.
Finally, no discussion of Kohlberg’s scheme is complete without at least
mentioning the famous gender-based broadside leveled by Carol Gilligan.85
Briefly, Gilligan suggested that Kohlberg’s concern with justice-reasoning
precluded a particularly female way of moral development, namely, a con-
cern with care and empathy. She argued that women do not abstractly rea-
son about their moral principles and that women scored lower on average
than men on the Kohlberg scheme not because of any deficiency of women
but because of deficiencies in the theory.
Gilligan’s care-ethic has not fared much better than Kohlberg’s scheme
when tested empirically. There is very limited empirical support for the
notion of gender differences,86 and even if there are two sorts of ethics for
organizing moral outlooks (which, interestingly enough, more often than
not arrive at the same moral conclusion), there is little reason to believe they
are associated with gender.87 Defining morality as either justice-oriented
system.”105 Examining social structure means taking into account the pat-
terns of human interaction that are supported by various institutions, such
as the family, school, or religion. Social structures can be very abstract, such
as the labor market, or more local and formalized, such as a volunteer group.
People within social structures have circumscribed options for interacting
with others. A typical person cannot call up the president and make an
appointment, nor take a job without being hired. Social structures shape
our social statuses and our expectations for our behavior as well as that of
others.106 We develop our senses of right and wrong within the various
“moral communities” in which we interact.107
While social structure refers to the broad pattern of relationships, social
psychologists use the term “culture” to refer to “a set of cognitive and eval-
uative beliefs—beliefs about what is or what ought to be—that are shared
by the members of a social system and transmitted to new members.”108
Culture, in this usage, deals more with beliefs than with interaction patterns,
what I translate into social psychology as a concern with the “definition of
the situation” that is at the root of the worldviews that are so important for
understanding conscience.109
These two notions, structure and culture, cohere in the sociological
tradition termed “social structure and personality” (SSP). The SSP tradition
attempts to demonstrate how social structures influence individual psy-
chological functioning. Nonsociologists know that some people earn more
money than others and likely think that rich people see the world differently
than working-class people. The SSP tradition explores how precisely this
works and what contributes to the ways people of different social classes see
the world. It is too simple to claim that money causes these differences. If that
were the case, we could simply give a person enough money to change social
classes, and they would seamlessly fit into their new social position. A lot of
Hollywood comedies explore this premise, reflecting our notion that money,
itself, is not what causes people to believe things. Rather, people develop
within groups and families (structures) that have belief systems (cultures).
Melvin Kohn’s work is paradigmatic of SSP research. Along with col-
leagues, he has long focused on tracing how people’s values related to
work are shaped. Parents’ values toward work derive from the complexity of
their jobs, for both genders and across cultures, and lead to children choos-
ing jobs that often replicate their parents’ location in society.110 Specifically,
one’s occupational conditions shape this process through (1) the closeness of
supervision, (2) routinization of work, and (3) the substantive complexity
of the work. These three conditions contribute to a person’s level of self-
direction: how much they value autonomous action as opposed to conform-
ing to the direction of others. This body of literature demonstrates that
those who hold advantaged positions in the occupational social structure are
more likely to have jobs with greater levels of self-direction than those in
less advantaged positions; people with jobs in the former category inter-
nalize this value and then teach it to their children.111 In turn, those chil-
dren grow up and look for jobs that fit their own values, meaning that
people choose to take the sorts of jobs that mirror what their parents valued
and what they were taught. Thus, we find intergenerational consistency in
social class partly due to the beliefs that children adopt based on their par-
ents’ jobs.112 Thus, differences in values along class lines become stable and
consistent and reproduce inequalities over time.113
Kohn suggests that social psychology, as a field, has not accorded a
central enough place to structure,114 but too much focus on structure is “out
of sync” with more recent, dynamic conceptions of human agents.115 Both
visions, one privileging patterns in social structural influences and the other
focusing on individual capacities to shape their own lives, are operative in
developing a social psychology of conscience. Certainly, our socialization
shapes who we are and what we choose. We are not mindlessly repeating
what our parents have told us, nor are we fully self-constructed. We are
shaped by the structures and cultures that we have developed within—
that shape our values and frame the choices and evaluations we make about
the world. We have free will, but we often choose to do what we are com-
fortable with, and that first-order feeling of comfort comes from our
upbringing and our experiences within particular structures and cultures.
The social structure and personality approach, while kept alive by
scholars in the tradition, has been subsumed by a broader field known as
life course studies.116 Life course studies encompass the study of a variety
of domains, ranging from how people age to issues of health, crime, and
family.117 Life course theory suggests that “individuals construct their own
life course through the choices and actions they take within the opportu-
nities and constraints of history and social circumstance.”118 We make
choices, but they are bounded ones.119 Thus, life course studies add notions
of time, history, and human agency to the social structure and personality
tradition but retain many of the insights linking individual psychology to
larger social structures.120 One drawback of both approaches is that
they have tended to parcel out human lives into discrete domains and exten-
sively studied those domains.121 As Margaret Archer puts it, “sociological
specialisation [sic] means that researchers are only interested in one domain
of agential practice, be it employment, the family, education, religion, health
and so forth. Such can never be the case of agents themselves.”122 Real peo-
ple have to juggle a variety of concerns, and from one perspective their lives
might take a continuous course, while from another they might appear
more disjointed. If you examine someone’s educational history, their life might look much more linear than if you looked at their marital history.
In both cases their decisions were influenced by structures, but their agentic
choices were important as well and should be studied.123 Likely, interac-
tions in each domain affected the other, suggesting the need for
a multidimensional model of the social actor.124
Kohn’s work demonstrates one way that social structures shape our
worldviews—consistent constellations of concepts that shape how we
interpret the world.125 Part of what feels like a “should” in the domain of
employment, then, is shaped by one’s location in the social structure. An
advantage of drawing on life course theory is its focus (like cultural soci-
ology) on the historical contingency of societal patterns, suggesting that
we need to look at more than simply the structure of social positions to
understand why any particular person or group develops their “moral
compasses.”126 We develop systems of attitudes and values—preconscious
ideologies127 that shape our reactions to what we see around us.128
Recall that our brain’s perceptual apparatuses are working much more
quickly than we can consciously follow, and thus our ideologies channel
how our minds interpret the social world and signal the strong reactions
we have to objects that cross a Bright Line or the intuitive attraction we
feel toward a value or goal that comprises a Bright Light. Our ideologies
are the starting point for organizing the world, and we go to great lengths to
justify ideologies that we perceive as natural.130 We may develop an ideol-
ogy to explain the world to us, such as a political ideology, but may do so
in the absence of any concrete information;131 we do not choose our ide-
ologies, but rather develop them subconsciously.132 For example, Alan
Wolfe finds that a core part of American middle-class ideology involves
the lessening importance of self-restraint and the increasing acceptability of
alternative forms of moral behavior, what he terms “morality writ small.”133
There is greater middle-class acceptance, he suggests, of other people’s right to draw Bright Lines different from one’s own, and this is
core to the ideology that Americans use to understand themselves and their
neighbors. Interestingly, many Americans are highly committed to their
moral principles but accept that people from other cultures might differ
on how they reason about moral issues, suggesting both an adherence to
principle and a tolerance of differing principles.134
More could be said about disentangling the structural and cultural
elements of this process, but such specifications are not my goal. I simply
for a moral sense is like the capacity for language—an innate capacity that
becomes instantiated within particular social contexts. Humans are hard-
wired to develop moral worldviews, though the specific content about right,
wrong, and what is obligatory or forbidden may differ across culture, nation,
ethnicity, and individual experience. The boundaries appear finite, how-
ever, anchored in species-specific capacities developed within a particular
history of meeting the demands of successful social groups.
These processes have led to a finite range of emotions and possible moral
judgments across societies even as their triggers might be variable. In
Chapter 4, I will discuss a more psychological perspective about how our
moral minds work and fill in some of the details behind Hauser’s notion
of a universal moral capacity. The chapter will sketch out how conscience
processes moral dilemmas and issues, abstracted from concrete situations.
Later on, we will see that regardless of what our mind decides, there are
a lot of factors within situations that preclude us from acting as we think we
should. Let us turn to how the individual mind processes moral informa-
tion, remembering that it has developed within a web of structural and
cultural forces.
Chapter 4
Processes of Conscience: How the Moral Mind Works
Our minds have evolved with a moral sense, a capacity for viewing the world
and ourselves in moral terms.2 The capacity for a moral sense is universal,
but it varies substantively—within certain boundaries—across different
cultures, groups, and societies. Human beings draw Bright Lines and are
motivated by Bright Lights, and the most important of these socially
learned moral signposts become internalized and viewed as core to a person’s
sense of self (see Chapter 8). We feel authentic when following our Brightest
Lights and feel morally deficient if we cross a Bright Line.
There is a growing body of psychological literature concerning the
mechanics of moral perception, judgment, and action. Most of these works
utilize the conventional notion of morality as altruism, a narrower definition
of morality than advocated here. Recall the distinction between mental
processes and actual behavior; what goes on in our minds, including emo-
tions, intuitions, and conscious reasoning, is only partially related to what
we actually do, in both moral and nonmoral domains. Does this mean that
what we think and feel is irrelevant? Of course not; even if there were no
relationship between thoughts, feelings, and behavior, the fact that we share
a cultural belief in a tight relationship among these phenomena is culturally
interesting. But what goes on in our heads does relate to what we actually
do; it is just not a straightforward relationship. People who report hold-
ing prosocial values are a bit more likely to engage in prosocial behavior, but
those values are just a subset of many important factors predicting such
behavior.3 What we know about internal processes is that judgment, emotion, and intuition say something about our intentions to act but have no direct connection with our actions per se.4 What we actually do is only
partially related to our intentions.
We judge the morality of others’ actions more critically when we believe
they have freely chosen those behaviors, because we assume they have acted
in concert with guiding principles they found legitimate. We do not hold
people to be morally (or legally) culpable for accidental actions and change
our moral evaluations if we discover mitigating circumstances. This means
we develop a theory of mind, a belief in how others process information
and form intentions that underlie actions.5 These theories are shaped by the
structures and cultures discussed in the last chapter. Understanding moral
action requires knowing something about the processes by which individuals
make sense of their social environments.
We do not judge other people, however, in the same manner as we judge
ourselves.6 We make what is known as the fundamental attribution error,
a core social-psychological principle that describes how we overemphasize
situational pressures in justifying our own actions, but overattribute others’
actions to their personal dispositions.7 In other words, we have a bias toward
assuming that others’ behavior is due to their personality or intentions, but
excuse our own actions based on situational factors. When I speed, I am
simply keeping up with traffic (situational factor); when others speed by me,
they are reckless maniacs (personality attribute). This suggests an inherent
bias in the ways we discern our own moral behavior: we focus on the sit-
uational pressures that lead us toward certain choices, but assume that others
just “are” that way. Cultural background plays a part in this; Hindus, for
example, refer more to context when explaining others’ behavior than
Americans.8 This chapter focuses on the psychological mechanisms that
translate structurally influenced cultural messages into personal moral judg-
ments of right and wrong and accordant intentions to act. Understanding
conscience necessitates understanding how moral judgments and reactions
work. These mental processes will later be merged with our ongoing sense
of self, forming the basis for determining who we are and where we stand
on defining moral issues. First, let us look at how the moral mind thinks
and feels.
Moral Judgment
Morality involves judgments about right and wrong,9 about what we feel
we and others should or should not do. We make these evaluations about
a range of social objects, from our personal thoughts and others’ actions
to political issues and abstract principles. We feel that we should not covet
our neighbor’s wife or that proper people should be quiet in movie theaters
or that nations should be allowed to start preemptive wars. In principle,
anything is subject to moral judgment; evaluation is ubiquitous and con-
stitutive of human lives. Thinking about moral issues is different from
thinking about other sorts of things given that it ultimately is concerned
with action,10 things that we feel we (or others) should or should not do.
It is, in this sense, a “practical activity,” shaped by a shared language within
a culture that shapes the evaluation of thoughts and actions.11 Morality is
the primary factor we use to judge others,12 though interestingly, we judge
ourselves based on competency, not morality. Perhaps our own moral worth
is taken for granted.
Judging is not a simple process. More accurately, it is wrong to say we
simply judge. Moral judgments come in two kinds: impersonal, which
trigger cognitive analysis and careful reasoning, and personal, which acti-
vate emotional centers in the brain.13 These two processes, perhaps
unsurprisingly, mirror the automatic and controlled distinction in our
dual-processing system. We form opinions about moral issues both delib-
eratively and intuitively, and those opinions do not always agree. When
confronted with a moral issue, we may have a snap judgment and we may
reason through the issue. The crying baby dilemma, presented in the last
chapter, is a prime example of an instance in which the two processes
conflict. An initial intuition is to protect babies, but one can also logically
reason about whether or not saving an entire family is a fair trade-off. The
same process is also at the root of racial stereotyping. Many people who do
not see themselves as prejudiced nonetheless automatically react in racially
biased ways. Some people are more likely than others to override these
intuitive reactions, but an unprejudiced action was not necessarily preceded
by an unprejudiced reaction.
Understanding moral judgment means simultaneously considering how
both elements of our dual-processing system influence moral perception.
This is a crucial step for understanding conscience, a cross-situational,
potentially self-contradictory aspect of individual selves extended in time.
Conscience involves an interplay between automatic and controlled moral
judgments, not simply a focus on one at the exclusion of the other. It employs
two sets of judgments—one deliberative and the other automatic—when
evaluating self and others. Automatic judgments occur fast and they occur
outside of conscious awareness, but what they often do is jump to a moral
conclusion that feels right, and our controlled processes unwittingly fill
in logically sound (enough) Lawyer Logic arguments to support that pre-
ordained conclusion. We are far from the dispassionate thinkers often posited
in ethical philosophy;14 second-order reasoning often “follows along and
Moral Reasoning
highways are a bit of an annoyance; those who are faster are reckless. My
speed, however I justify it, is the vantage point for proper driving. I judge
others through that lens and morally judge them based on the reasons and
principles I use to justify my speed. We see ourselves as less susceptible to
bias and more likely to reason correctly;32 thus we feel justified in reasoning
from our own presuppositions without challenging them.
As abstract as a discussion of various ultimate moral principles can get,
ultimately moral judgment is a situation-specific process. Moral judgment
involves multidimensional rules and standards and depends largely on how
the facts at hand are interpreted.33 Particular situations call forth partic-
ular facts, knowledge, and, importantly, intuitions that shape our moral
judgments.
Moral Intuitions
oldest parts of the brain.43 Given this history and the prelinguistic need, discussed in Chapter 2, to first identify and react to danger, negative reactions are more strongly hard-wired than positive ones.
The social intuitionist model dovetails with Antonio Damasio’s previously
discussed somatic marker hypothesis, which posits that emotional expe-
riences become marked in our brains such that they are automatically
triggered if relevant stimuli are present. Rather than reanalyzing information,
our brains use emotional markers as shortcuts for determining the proper
responses.44 As we interact in the world, certain stimuli trigger somatic
markers and lead to automatic processing that leads to judgments of
potential actions. A sociologist looks at this finding and expects that, given patterns in where we interact and with whom, our intuitions and somatic markers will occur somewhat predictably. According
to this line of thought, past experience with particular moral issues—such
as giving money to a homeless person—will be marked with certain intuitive
associations and quickly give rise to an inclination to act in a certain manner
in other, similar situations. If we have a positive association with giving
money based on past experience, then our current intuition might be to give
money to this stranger. A negative experience, more likely to stick with us,
will lead to a gut-level aversion to a related current stimulus. We might
reason through the various relevant moral principles (self-sufficiency, charity
to strangers) and make a decision that differs from our initial feeling, but
likely the judgment occurred before we entered into the slower process of
moral reasoning that sent us looking for confirmatory reasons.
The dual-attitude model builds on our minds’ dual-processing capac-
ities.45 We can have more than one attitude toward social objects, including
moral concerns. The implicit judgments we make arise faster and are harder
to change than our deliberative opinions. Gut feelings are often given
priority, with people reporting intuitions as perfectly valid inputs for deal-
ing with moral dilemmas even as they acknowledge intuitions as far from
infallible.46 We are often unaware that we are using biased Lawyer Logic in
our deliberation, rather than the unbiased reasoning we think we are engag-
ing in. Our first-order system, the “adaptive unconscious,” operates as a spin
doctor leading us to justify conclusions we are predisposed to want to reach.47
Values can be interpreted as articulations of deep-seated intuitions or
somatic markers about which possible Bright Lights are the most personally
attractive. Given the range of possible human values, some values will seem
more intuitively appealing than others, more indicative of who we feel we
“really” are.48 Some of us respond more to ideals of benevolence (taking care of others in our lives), while for others issues related to personal security evoke stronger intuitions linked to more powerful somatic markers. We
judge others based on these intuitions without always being consciously
aware that we do so. We rarely articulate and reason about caring for friends
that share our values; our feelings just seem right, and we intuit a com-
monality with these friends. If pressed, we might articulate these shared
preferences, but we rarely have to do so.
Values are often treated as truisms, unreflective ways of seeing the world
that are rarely questioned.49 People do not always offer good reasons for their values; an intuition anchored in the feeling evoked by an important (if abstract) value supersedes the need to reason through that value. It just feels self-evidently right.50 Values are limited, recall, and priv-
ileging one value often means downplaying another, given the necessary
trade-offs that occur in complex human lives. But if we can get somebody
to consciously think through the reasons that justify their values, they are
more likely to keep to those values when situational pressures would oth-
erwise lead them to behave contrary to those values.51 People who have been
forced to think through and articulate support for their values are more
likely to engage in behaviors that express those values.52 Even here, though,
reasoning is not employed in an unbiased assessment of our values, but
rather to build support for the first-order gut sense that certain abstract
aims are preferable to others. Moral intuitions are not just flashes of cold
insight, but rather take the form of recognizable emotions that suggest
culturally appropriate potential responses.
Moral Emotions
Emotions motivate human life.53 They motivate what we do, determine
how we interact, fuel conflict, and recommend reconciliation.54 This has
not always been highlighted in the social sciences, though classic thinkers
such as Adam Smith and David Hume believed emotions were central to
understanding people, and strides have lately been made in correcting the more recent omission of emotions.55
Far from being the province of illogical people, emotions end up being vital
for the capacity to make (supposedly cold) rational judgments.57 They are
signals about how we perceive current concerns and how those concerns
fit into our ongoing lives: “Through their emotions, people comment, to
themselves if not to others, on what the interaction that is occurring says
about themselves in a given scene, and they also comment on the overall
stories that they are constructing as they shape a path through life.”58
Moral emotions are a subset of moral intuitions, motivational experiences
elicited by some trigger that conjure forth an instant, preconscious feeling.
Moral emotions are the link between moral standards (Chapter 3) and
moral behavior.59 Moral emotions require the integration of three aspects
of the mind: long-term planning structures, perceptions of the current
environment, and central motive states (behavior-related emotions).60 We
have especially strong emotional reactions when our core values and moral beliefs are threatened,61 reactions such as righteousness, ridicule, and vengeance.62
In Western culture, emotions are a purported window into the
“authentic” self.63 Social scientists get a better window into people’s moral
senses by asking about their emotions rather than asking them to justify
moral behavior by appealing to abstract moral codes.64 Normal people are more likely to discuss feelings of shame, resentment, and pride, feelings that suggest moral judgments, if not fully developed, articulated ethical
philosophies. Emotions represent responses to concrete social situations and
for a long time have been left out of discussions about (supposedly objective)
moral reasoning. But real-life moral conflicts are draining; they are not
treated as just abstract problems.65 They are draining precisely because of
the important emotional signals they conjure. Truly difficult moral dilemmas
do not have a clear-cut answer.
There is a finite list of moral emotions. Rather than reinvent another
wheel, I offer here an overview of Haidt’s four-family typology of moral
emotions:66 (a) other-condemning, (b) self-conscious, (c) other-suffering,
and (d) other-praising.
Other-Condemning Emotions
These emotions motivate people to change relations with those who are
perceived as having violated important relationships or moral codes.
They include anger, contempt, and disgust. There is cross-cultural evidence
supporting the thesis that anger stems from violations of autonomy, con-
tempt from violations of community standards, and disgust from violations
of divinity or purity.67 Anger involves short-term attack responses but
long-term reconciliation, while contempt is characterized by short- and
long-term exclusion and rejection.68 Moral anger (at the violation of a moral
standard) can be distinguished from personal anger (at being harmed) and
empathetic anger (at seeing someone else harmed).69 Disgust appears related
Self-Conscious Emotions
Other-Suffering Emotions
Other-Praising Emotions
These emotions are the more positive set of moral emotions and reflect
the fact that humans appear to be appreciative of others’ positive moral
actions. Gratitude, awe, and elevation have all been less studied than more
negative emotions. Gratitude seems to have three moral functions: as a
barometer of moral relationships, as a motivating force, and as positive
reinforcement.89 We might add trust to that group, as trust appears nec-
essary for the development of a moral community.90 Individual levels of
trust appear shaped more by ongoing social experiences than by individual
predispositions,91 again suggesting the importance of social factors for moral
functioning.
This cursory overview of emotions is intended simply to give a modicum
of specificity to the later discussion of how internal moral reactions have
something to do with later action and self-interpretation. Specific emotions
signal us about the potential disjuncture between what is going on and
what was expected and implicate visions of who we are and who we want
to be. Emotions alone, however, are not enough to motivate moral action
above instinctive reactions such as disgust. For positive behaviors, especially,
moral emotions need to be shaped and refined according to specific social
and cultural frameworks that label the emotions and channel them
toward certain ends.92 This involves what is known in the sociological lit-
erature as “framing” and bridges the definitions of social reality given to
us by our assorted cultures and the emotional feelings and moral intuitions
we develop within those definitions.
Framing Intuitions
The sociologist Erving Goffman articulated the notion of framing to explain
how the social world influences an individual’s cognitive processes within
any given situation.93 His goal was not to describe how individuals think but to explain how people within concrete situations “bracket” their possible understandings, interpreting the world through the relevant frame.
Upon meeting somebody, we try to define the potential interaction. If we
frame them as a potential business partner we will present ourselves dif-
ferently than if we frame the meeting as a one-time encounter. We inter-
pret their actions and statements through this frame and bracket off other
possible interpretations. The social rules for meeting business partners
effectively rule out all sorts of things we might do in a different situation:
yawn, flirt, interrupt, disclose personal information, or gossip about our
company secrets. Frames are a way of thinking about the local rules, eti-
quette, and norms that guide an interaction. They “rule out” potential
frames applied by the actor as well as by others who might judge that
action.
Political debates offer concrete examples of framing in action. Discourses
about, for example, the nature of abortion (“choice” vs. “life”) are intended
to conjure up a particular frame and draw on a morally tinged set of
informational assumptions that are suspected to lead to particular if/then
moral judgments. Most Americans likely value both giving people choice in their lives and supporting life. But the abortion
discourse appeals to one or the other value, rendering that interpretation
and its accordant sense of moral rightness or outrage as paramount.
Depending on how broadly we frame an issue, we can appeal to general
moral sentiments that make it seem as if only our side has the moral high
ground. The interesting social science issue is how different frames compete
for perfectly valid moral intuitions in the same person. Depending on the
framing of an issue, different moral emotions can be conjured forth, espe-
cially when somebody is unfamiliar with an issue. As we get informed
about issues, we build more complicated frames and develop informational
assumptions that link up to broader values, largely reflecting our Bright
Lights but potentially altering them. Informational assumptions are not
simply interchangeable with our broader moral principles, though they
are difficult to empirically disentangle.108
Values are the Brightest Lights, the most abstract principles that guide
the development of particular worldviews, frames, and if/then behavioral
predispositions. Values frame cognition, but human values are multiple, conflicting, and potentially ambiguous.109 Values are broad frames that resonate with particular if/then reactions. Framing an issue along a value of “life” (“benevolence,” in the values literature) might conflict—in a specific case—with a value of “autonomy” or “choice,” depending on which principle we prioritize; we will then tease out a particular moral judgment.
Certain words preconsciously register as more personally relevant, and if
tradeoffs are necessary, they will be given priority. But the range of human
values suggests that different frames can call up meaningful value-intuitions
in anyone, if we just frame an issue in terms that resonate with that person’s
informational assumptions. We all value, to some extent, personal freedom
and security; the political issue is how to juggle potential tradeoffs and
how to motivate voters for our side by appealing to these intuitions. This
is where informational assumptions play in, for example, whether someone
defines a fetus as a “person” or not.
Moral considerations both differ from and take primacy over other considerations we incorporate into our potential if/then procedures. For example, take adolescents who commit crimes. Changes in formal sanctions do not shift adolescents’ criminal behaviors nearly as much as shifts in their
Moral judgments are often more than intuitions; they involve concepts about
different groups, social relationships, perspectives on society, and distinctions
between when rights should be applied and when they should be denied
…[such] reactions are complex and involve reasoning about rights, fairness,
and welfare, as well as about the injustices of the dominance and power
exerted by one group on another.117
The actual process is muddier than can be properly captured here, especially when considering that we can, over time, reconsider our intuitive responses and even alter them. An interesting research question is why we so often fail to do so, remaining content with our already developed worldviews and rarely challenging them. The larger point, motivating this book, is
that we cannot meaningfully talk about human beings without engaging
this moral dimension. To be human is precisely to take moral stands and
have moral reactions to potentially moral issues.
From a sociological perspective, we become settled in the frames we use
to interpret the world around us largely as we become more embedded in
sets of particular relationships and social positions. Part of our if/then
our actions, and different signals will compel different responses, depending
on cultural and situational norms. But emotions are linked to informational
assumptions and to frames.
Some emotions are more under voluntary control than others; we may
feel shame based on things we cannot control, such as where our parents
or even our ancestors come from. These negative feelings, whether due to
our own actions or because people or groups that we align ourselves with
do poorly (like a sports team or political party), signal a crossing of a Bright
Line. Feelings of pride, on the other hand, might need to be downplayed to
be polite in a particular encounter, but suggest we have taken strides toward
a Bright Light, or successfully refrained from crossing a Bright Line. The
feelings a particular situation calls forth implicate our first-order processing,
monitoring situations through filters of socially shaped worldviews and
informational assumptions. Feelings do not always involve moral issues,
obviously, but anything we say or do is at least susceptible to moral eval-
uation by us or others.
The tools are in place for an abstract understanding of morality, cog-
nition, and feeling. We have a sense of how we judge ourselves and others
and how society shapes the assumptions underlying those judgments in
patterned ways. Let us move from abstract definitions into a more concrete
realm. Morality is central to how people think and perceive the world. If
this is the case, why do people so often act in immoral ways, or at least fail
to live up to their moral standards? Chapters 5 and 6 deal with this potential
conundrum on two different levels: first, how social situations influence
what we do, and second, how our social groups shape our perceptions. What
we think (and feel) tells us something about what we will do, but not as
much as we popularly believe.