An Introduction to Brain and Behavior
Fifth Edition
Bryan Kolb, University of Lethbridge
Ian Q. Whishaw, University of Lethbridge
G. Campbell Teskey, University of Calgary
This book is dedicated to Cornelius H. “Case” Vanderwolf
(1935–2015), with whom each of us studied. Case did his Ph.D.
with Donald O. Hebb and postdoctoral research with Konrad
Akert and Roger Sperry, figures whose research is featured
prominently in this book. He was an advocate of the theory that
the study of behavior provided the window to the organization of
the brain, and he was especially encouraging of our first attempt
at book writing.
Publisher, Psychology and Sociology: Rachel Losh
Senior Acquisitions Editor: Daniel DeBonis
Development Editor: Barbara Brooks
Assistant Editor: Katie Pachnos
Senior Marketing Manager: Lindsay Johnson
Marketing Assistant: Morgan Ratner
Executive Media Editor: Rachel Comerford
Associate Media Editor: Jessica Lauffer
Director, Content Management Enhancement: Tracey Kuehn
Managing Editor, Sciences and Social Sciences: Lisa Kinne
Project Editor: Edgar Doolan
Media Producer: Elizabeth Dougherty
Copy Editor: Kate Daly
Senior Photo Editor: Cecilia Varas
Photo Researcher: Richard Fox
Senior Production Manager: Paul Rohloff
Director of Design, Content Management Enhancement: Diana Blume
Design Manager: Blake Logan
Interior Design: Charles Yuen
Art Manager: Matthew McAdams
Illustration Coordinator: Janice Donnola
New Illustrations: Evelyn Pence, Matthew McAdams
Composition: codeMantra
Printing and Binding: RR Donnelley
Cover Art: ALFRED PASIEKA/SCIENCE PHOTO LIBRARY/Getty
Images
Chapter Opener Illustrations: Katherine Streeter
Library of Congress Control Number: 2015957301
ISBN-13: 978-1-4641-0601-9
ISBN-10: 1-4641-0601-0
Copyright © 2016, 2014, 2011, 2006 by Worth Publishers
All rights reserved.
Printed in the United States of America
First printing
Worth Publishers
One New York Plaza
Suite 4500
New York, New York 10004-1562
www.macmillanhighered.com
ABOUT THE AUTHORS
Bryan Kolb received his Ph.D.
from The Pennsylvania State
University in 1973. He conducted
postdoctoral work at the University
of Western Ontario and the
Montreal Neurological Institute.
He moved to the University of
Lethbridge in 1976, where he is
Professor of Neuroscience and
holds a Board of Governors Chair
in Neuroscience. His current
research examines how neurons of
the cerebral cortex change in
response to various factors,
including hormones, experience,
psychoactive drugs, neurotrophins,
and injury, and how these changes are related to behavior in the normal
and diseased brain. Kolb has received the distinguished teaching medal
from the University of Lethbridge. He is a Fellow of the Royal Society
of Canada and of the Canadian Psychological Association (CPA), the
American Psychological Association, and the Association for
Psychological Science. He is a recipient of the Hebb Prize from CPA and
from the Canadian Society for Brain, Behaviour, and Cognitive Science
and has received four honorary doctorates. He is a Senior Fellow of the
Experience-Based Brain and Behavioral Development program of the
Canadian Institute for Advanced Research. He and his wife train and
show horses in Western riding performance events.
Ian Q. Whishaw received his Ph.D. from Western University and is a
Professor of Neuroscience at the University of Lethbridge. He has held
visiting appointments at the University of Texas, University of Michigan,
Cambridge University, and the University of Strasbourg. He is a fellow
of Clare Hall, Cambridge, and of the Canadian Psychological
Association, the American Psychological Association, and the Royal
Society of Canada. He is a recipient of the Canadian Humane Society
Bronze Medal for bravery, the Ingrid Speaker Gold Medal for research,
and the distinguished teaching medal from the University of Lethbridge.
He has received the Key to the City of Lethbridge and has honorary
doctorates from Thompson Rivers
University and the University of
Lethbridge. His research addresses
the neural basis of skilled
movement and the neural basis of
brain disease, and the Institute for
Scientific Information includes
him in its list of most cited
neuroscientists. His hobby is
training horses for Western
performance events.
G. Campbell Teskey received
his Ph.D. from Western University
in 1990 and then conducted
postdoctoral work at McMaster
University. In 1992 he relocated to
the University of Calgary, where
he is a professor in the Department
of Cell Biology and Anatomy and
the Hotchkiss Brain Institute. His
current research program examines
the development, organization, and
plasticity of the motor cortex, as
well as how seizures alter brain
function. Teskey has won
numerous teaching awards, has
developed new courses, and is a
founder of the Bachelor of
Science in Neuroscience program
at his home university. He serves
as Education Director for the
Hotchkiss Brain Institute and chairs the Education Committee of
Campus Alberta Neuroscience. His hobbies include hiking, biking,
kayaking, and skiing.
CONTENTS IN BRIEF
Preface
Media and Supplements
Developmental Disability
8-5 How Do Any of Us Develop a Normal Brain?
SUMMARY
KEY TERMS
Stroke
Epilepsy
Multiple Sclerosis
Neurodegenerative Disorders
Are All Degenerative Dementias Aspects of a Single Disease?
Age-Related Cognitive Loss
16-4 Understanding and Treating Psychiatric Disorders
Schizophrenia Spectrum and Other Psychotic Disorders
Mood Disorders
RESEARCH FOCUS 16-4 Antidepressant Action and Brain Repair
Anxiety Disorders
16-5 Is Misbehavior Always Bad?
SUMMARY
KEY TERMS
ANSWERS TO SECTION REVIEW SELF-TESTS
GLOSSARY
REFERENCES
NAME INDEX
SUBJECT INDEX
PREFACE
The Fifth Edition of An Introduction to Brain and Behavior continues
to reflect the evolution of behavioral neuroscience. In keeping with this
evolution, we welcome G. Campbell Teskey, whose fresh perspective—
especially on topics related to neurophysiology and nervous system
disorders—enhances our author team.
Other major changes in this edition include a deeper emphasis on
genetics and epigenetics throughout. Epigenetics is especially important
for understanding brain and behavior because environmentally induced
modifications in gene expression alter the brain and ultimately
behavioral development. Thus, experience—especially early experience
—modifies how brain development unfolds. These modifications—of at
least some behavioral traits—can be transferred across generations, a
process known as epigenetic inheritance. We introduce it in the case
study at the end of Section 3-3 .
This edition fully addresses advances in imaging technology,
including techniques that are fueling the burgeoning field of
connectomics and progress toward a comprehensive map of neural
connections—a brain connectome. These exciting advances are
especially relevant in the second half of the book, where we review
higher-level functions.
Imaging advances and epigenetics concepts and research continue as a
prime focus in this revision but are not our sole focus. In Chapter 2 we
introduce the enteric nervous system, which controls the gut, and later
chapters elaborate on ENS functioning. Section 7-1 introduces the
emerging field of synthetic neurobiology, elaborated in Section 16-2 .
Section 5-2 adds new research on lipid neurotransmitters and detail on
receptor subtypes.
The range of updates and new coverage in the Fifth Edition text and
Focus features is listed, chapter by chapter, in the margins of these
Preface pages. See for yourself the breadth and scope of the revision;
then read on to learn more about the big-picture improvements in the
Fifth Edition.
With encouraging feedback from readers, the book’s learning
apparatus continues to feature sets of self-test questions at the end of the
major sections in each chapter. These Section Reviews help students
track their understanding as they progress. Answers appear at the back of
the book.
We continue to expand the popular margin notes. Beyond offering
useful asides to the text narrative, these marginalia increase the reader’s
ease in finding information, especially when related concepts are
introduced early in the text then elaborated on in later chapters. Readers
can return quickly to an earlier discussion to refresh their knowledge or
jump ahead to learn more. The margin notes also help instructors move
through the book to preview later discussions.
The illustrated Experiments, one of the book’s most popular features,
show readers how researchers design experiments, that is, how they
approach the study of brain–behavior relationships. The Basics features
let students brush up or get up to speed on their science foundation—
knowledge that helps them comprehend behavioral neuroscience.
We have made some big changes, yet much of the book remains
familiar. In shaping content throughout, we continue to examine the
nervous system with a focus on function, on how our behavior and our
brain interact, by asking key questions that students and neuroscientists
ask:
• Why do we have a brain?
• How is the nervous system organized, functionally as well as
anatomically?
• How do drugs and hormones affect our behavior?
• How does the brain learn?
• How does the brain think?
FIFTH EDITION UPDATES
CHAPTER 1: ORIGINS
NEW: locked-in syndrome in §1-1 features the case of Martin Pistorius.
NEW: Figure 1-12 , Neanderthal Woman, and NEW text describes H.
sapiens sapiens–H. neanderthalensis intermingling.
NEW discussion in §1-4 and Comparative Focus 1-3 explain brain cell
packing density.
UPDATED coverage, Altered Maturation, in §1-4, simplifies the
concept neoteny.
NEW section, Acquisition of Culture, in §1-5, introduces the concept
memes.
CHAPTER 2: NEUROANATOMY
NEW: Research Focus 2-1 , Agenesis of the Cerebellum.
UPDATED Figure 2-2 charts ENS in anatomic and functional nervous
system organization.
NEW: §2-5 introduces the enteric nervous system, diagrammed in
NEW Figure 2-31 .
CHAPTER 5: NEUROTRANSMISSION
NEW in §5-2: Lipid Transmitters focuses on endocannabinoids ;
Varieties of Receptors introduces subunits plus NEW Table 5-3 , Small-
Molecule Transmitter Receptors.
NEW in §5-3: enteric nervous system–CNS autonomy.
REVAMPED: neural bases of habituation and sensitization responses in
Aplysia, in §5-4.
CHAPTER 8: DEVELOPMENT
NEW Clinical Focus 8-1, Linking SES to Cortical Development, sets a
chapterwide theme.
NEW in §8-4: infant sexual differentiation now introduces Hormones
and Brain Development; Gut Bacteria and Brain Development reveals
the microbiome’s influence.
UPDATED in §8-4: fetal exposure statistics in Drugs and Brain
Development; coverage of SIDS.
Scientific Background Provided
We describe the journey of discovery that is neuroscience in a way that
students just beginning to study the brain and behavior can understand;
then they can use our clinical examples to tie its relevance to the real
world. Our approach provides the background students need to
understand introductory brain science. Multiple illustrated Experiments
in 13 chapters help them visualize the scientific method and how
scientists think. The Basics features in 6 chapters address the fact that
understanding brain function requires understanding information from all
the basic sciences.
These encounters can prove both a surprise and a shock to students
who come to the course without the necessary background. The Basics
features in Chapters 1 and 2 address the relevant evolutionary and
anatomical background. In Chapter 3 , The Basics provides a short
introduction to chemistry before the text describes the brain’s chemical
activities. In Chapter 4 The Basics addresses electricity before exploring
the brain’s electrical activity.
Readers already comfortable with the material can easily skip it; less
experienced readers can learn it and use it as a context for neuroscience.
Students with this background can tackle brain science with greater
confidence. Similarly, for students with limited knowledge of basic
psychology, we review such facts as stages of behavioral development in
Chapter 8 and forms of learning and memory in Chapter 14 .
Students in social science disciplines often remark on the amount of
biology and chemistry in the book, whereas an equal number of students
in biological sciences remark on the amount of psychology. More than
half the students enrolled in the Bachelor of Science in Neuroscience
program at the University of Lethbridge have switched from a
biochemistry or psychology major after taking this course. We must be
doing something right!
Chapter 7 showcases the range of methods behavioral neuroscientists
use to measure and manipulate brain and behavior—traditional methods
and such cutting-edge techniques as optical tomography, resting-state
fMRI, chemogenetics, and DREADD. Expanded discussions of
techniques appear where appropriate, especially in Research Focus
features, including Focus 3-2, Brainbow: Rainbow Neurons; Focus 4-3,
Optogenetics and Light-Sensitive Channels; and Focus 16-1,
Posttraumatic Stress Disorder, which includes treatments based on
virtual reality exposure therapies.
Finally, because critical thinking is vital to progress in science, select
discussions throughout the book center on relevant aspects. Section 1-2
concludes with The Separate Realms of Science and Belief. Focus 15-3,
The Rise and Fall of Mirror Neurons, demonstrates how the media—and
even scientists—can fail to question the validity of research results.
Section 12-5 introduces the idea that gender identity comprises a broad
spectrum rather than a female–male dichotomy. Section 7-7 considers
issues of animal welfare in scientific research and the use of laboratory
animal models to mimic human neurologic and psychiatric disorders.
Clinical Focus Maintained
Neuroscience is a human science. Everything in this book is relevant to
our lives, and everything in our lives is relevant to neuroscience.
Understanding neuroscience helps us understand how we learn, how we
develop, and how we can help people with brain and behavioral
disorders. Knowledge of how we learn, how we develop, and the
symptoms of brain and behavioral disorders offers insights into
neuroscience.
Clinical material also helps to make neurobiology particularly
relevant to students who are going on to a career in psychology, social
work, or another profession related to mental health, as well as to
students of the biological sciences. We integrate clinical information
throughout the text and Clinical Focus features, and we expand on it in
Chapter 16 , the book’s capstone, as well.
Assessment Tools
Downloadable Diploma Computerized Test Bank Prepared and
revised by Christopher Striemer of Grant MacEwan University, the Test
Bank includes a battery of more than 1300 multiple-choice and short-
answer test questions, as well as diagram exercises. Each item is keyed
to the page in the textbook on which the answer can be found. All the
questions have been thoroughly reviewed and edited for accuracy and
clarity. The Diploma software allows users to add, edit, scramble, or
reorder items. The Test Bank also allows you to export into a variety of
formats that are compatible with many Internet-based testing products.
For more information on Diploma, please visit
https://blackboard.secure.force.com . The Test Bank files can be
downloaded from Worth’s online catalog at
www.macmillanhighered.com .
Presentation
Illustration Slides and Lecture Slides Available for download from
www.macmillanhighered.com , these slides can either be used as they are
or customized to fit the needs of your course. There are two sets of slides
for each chapter. The Illustration slides feature all the figures, photos,
and tables. The Lecture slides, prepared and revised by Matthew
Holahan of Carleton University, feature main points of the chapter with
selected figures and illustrations.
CHAPTER 1
1-1 The Brain in the Twenty-First Century
Fred Linge emerged profoundly changed from his journey of learning to
live with traumatic brain injury (TBI). The purpose of this book is to
take you on a journey toward understanding the link between brain and
behavior: how the brain is organized to produce behavior. Evidence
comes from studying three sources: (1) the evolution of brain and
behavior in diverse animal species, (2) how the brain is related to
behavior in typical people, and (3) how the brain changes in people with
brain damage or other brain dysfunction. The knowledge emerging from
these three lines of study is changing how we think about ourselves, how
we structure education and our social interactions, and how we aid those
with brain injury, disease, and disorder.
traumatic brain injury (TBI) Wound to the brain that results from
a blow to the head.
Illustrated Experiments through the book reveal how neuroscientists
conduct research, beginning with Experiment 1-1 in Section 1-2 .
We will marvel at the potential for future discoveries. We will begin
to understand how genes influence the brain’s structure and activity. We
will learn how our experience in turn changes our genes. We will review
developments in brain imaging techniques that allow us to watch our
own brain in action as we think and solve problems or sleep. We will
consider the goals of brain–behavior research in arresting the progress of
brain disease and finding cures for brain disease and injury. We will
marvel at the injured brain interacting with machines that serve as
prosthetics. We will consider the possibility of repairing and even
replacing malfunctioning brains. We will also consider the possibility of
interacting with artificial brains—brains of our making that can, in
principle, match our intelligence or perhaps even surpass it. Our journey
will broaden your understanding of what makes us human.
FIGURE 1-1 The Human Nervous System The brain and spinal cord
together make up the central nervous system. All of the nerve processes
radiating out beyond the brain and spinal cord and all of the neurons
outside the CNS connect to sensory receptors, muscles, and internal body
organs to form the peripheral nervous system.
Together, the brain and spinal cord make up the central nervous
system (CNS). The CNS is encased in bone, the brain by the skull and
the spinal cord by the backbone, or vertebrae. The CNS is called central
both because it is physically the nervous system's core and because it is
the core structure mediating behavior. All of the processes radiating out
beyond the brain and spinal cord constitute the peripheral nervous
system (PNS).
central nervous system (CNS) The brain and spinal cord, which
together mediate behavior.
peripheral nervous system (PNS) All of the neurons in the body
outside the brain and spinal cord; provides sensory and motor
connections to and from the central nervous system.
The human nervous system is composed of cells, as is the rest of the
body, and these nerve cells, or neurons, control behavior most directly.
Neurons in the brain communicate with one another, with sensory
receptors in the skin, with muscles, and with internal body organs. As
shown in Figure 1-2 , the human brain comprises two major sets of
structures. The cerebrum (forebrain), shown in Figure 1-2A , has two
nearly symmetrical halves, called hemispheres, one on the left and one
on the right. The cerebrum is responsible for most of our conscious
behaviors. It enfolds the brainstem (Figure 1-2B ), a set of structures
responsible for most of our unconscious behaviors. The second major
brainstem structure, the cerebellum, is specialized for learning and
coordinating our movements. Its conjoint evolution with the cerebrum
suggests that it assists the cerebrum in generating many behaviors.
neuron Specialized nerve cell engaged in information processing.
cerebrum (forebrain) Major structure of the forebrain that consists
of two mirror-image hemispheres (left and right) and is responsible
for most conscious behavior.
hemisphere Literally, half a sphere, referring to one side of the
cerebrum.
brainstem Central structure of the brain; responsible for most
unconscious behavior.
cerebellum Major brainstem structure specialized for learning and
coordinating movements; assists the cerebrum in generating many
behaviors.
(A) Cerebrum (forebrain)
(B) Right hemisphere of cerebrum
FIGURE 1-2 The Human Brain (A) Shown head-on, as oriented
within the human skull, are the nearly symmetrical left and right
hemispheres of the cerebrum. (B) A cut through the middle of the brain
from back to front reveals the right hemispheres of the cerebrum and
cerebellum and the right side of the brainstem. The spinal cord (not
shown) emerges from the base of the brainstem. Chapter 2 describes the
brain’s functional anatomy.
What Is Behavior?
Irenäus Eibl-Eibesfeldt began his textbook Ethology: The Biology of
Behavior, published in 1970, with the following definition: “Behavior
consists of patterns in time.” These patterns can be made up of
movements, vocalizations, or changes in appearance, such as the facial
movements associated with smiling. The expression patterns in time
includes thinking. We cannot directly observe someone’s thoughts. The
changes in the brain’s electrical and biochemical activity that are
associated with thought show, however, that thinking, too, is a behavior
that forms patterns in time.
The behavioral patterns of animals vary enormously. Animals produce
behaviors that consist of inherited responses, and they also produce
learned behaviors. Most behaviors consist of a mix of inherited and
learned actions. Figure 1-3 illustrates the contributions of mainly
inherited and mainly learned behavior in the eating behavior of two
animal species, crossbills and roof rats.
A crossbill’s beak is specifically designed to open pine cones.
This behavior is innate.
A baby roof rat must learn from its mother how to eat pine
cones. This behavior is learned.
Descartes’s thesis that the mind directed the body was a serious
attempt to give the brain an understandable role in controlling behavior.
This idea that behavior is controlled by two entities, a mind and a body, is
dualism (from Latin, meaning two ). To Descartes, the mind received
information from the body through the brain. The mind also directed the
body through the brain. The rational mind, then, depended on the brain
both for information and to control behavior.
dualism Philosophical position that both a nonmaterial mind and a
material body contribute to behavior.
Problems plague Descartes’s dualistic theory. It quickly became
apparent to scientists that people who have a damaged pineal body or
even no pineal body still display typical intelligent behavior. Today, we
understand that the pineal gland’s role in behavior is relegated to
biological rhythms; it does not govern human behavior. We now know
that fluid is not pumped from the brain into muscles when they contract.
Placing an arm in a bucket of water and contracting its muscles does not
cause the water level in the bucket to rise, as it should if the volume of the
muscle increased because fluid had been pumped into it. We now also
know that there is no obvious way for a nonmaterial entity to influence
the body: doing so requires the spontaneous generation of energy, which
violates the physical law of conservation of matter and energy.
The difficulty in Descartes’s theory of how a nonmaterial mind and a
physical brain might interact has come to be called the mind–body
problem. Nevertheless, Descartes proposed that his theory could be
tested: to determine whether an organism possessed a mind, he proposed
the language test and the action test. To pass the language test,
an organism, or even an intelligent machine such as a robot, must use
language to describe and reason about things that are not physically
present. The action test requires behavior based on reasoning, not just an
automatic response to a particular situation. Descartes proposed that
nonhuman animals and machines would be unable to pass the tests
because they lacked a mind.
mind–body problem Difficulty of explaining how a nonmaterial
mind and a material body interact.
The contemporary version of Descartes's language test, the Turing test,
is named for Alan Turing, an English mathematician. In 1950, Turing
proposed that a machine could be judged conscious if a questioner could
not distinguish its answers from a human’s. Machines are close to passing
the Turing test; some might argue that it’s happened. Experimental
research also casts doubt on Descartes’s view that nonhuman animals
cannot pass the language and action tests. Studies of language in apes and
other animals partly seek to discover whether other species can describe
and reason about things that are not present. Comparative Focus 1-2, The
Speaking Brain, summarizes a contemporary approach to studying
language in animals.
A 2014 film, The Imitation Game, dramatizes Turing’s efforts during
World War II to crack the Nazis' Enigma code.
Descartes’s theory of mind led to bad results. Based on dualism, some
people argued that young children and the insane must lack minds,
because they often fail to reason appropriately. We still use the expression
he’s lost his mind to describe someone who is mentally ill. Some
proponents of dualism also reasoned that, if someone lacked a mind, that
person was simply a machine, not due respect or kindness. Cruel
treatment of animals, children, and the mentally ill has for centuries been
justified by Descartes’s theory. It is unlikely that Descartes himself
intended these interpretations. Reportedly he was very kind to his own
dog, Monsieur Grat.
FIGURE 1-5 An Inherited Behavior People the world over display the same
emotional expressions that they recognize in others—these smiles, for example. This
evidence supports Darwin’s suggestion that emotional expression is an inherited
behavior.
Hebb’s claim dovetails with his theory of how the brain produces
consciousness. He suggested that learning is enabled by neurons forming
new connections with one another in the brain. He called the resulting
neuronal network a cell assembly. As the neural substrate for the learned
experience, cell assemblies interact: one cell assembly becomes
connected to another. This linking of cell assemblies is thus the linking
of memories, which to Hebb is what consciousness is.
Hebb’s argument is materialistic. The contemporary philosophical
school eliminative materialism takes the position that if behavior can be
described adequately without recourse to the mind, then the mental
explanation should be eliminated. Daniel Dennett (1978) and other
philosophers, who have considered such mental attributes as
consciousness, pain, and attention, argue that an understanding of brain
function can replace mental explanations of these attributes. Mentalism,
by contrast, defines consciousness as an entity, attribute, or thing. Let us
use the concept of consciousness to illustrate the argument for
eliminative materialism.
Recovering Consciousness: A Case Study
Darwin offered no suggestion about how the brain produces
consciousness, although his theory predicted that it must. One patient’s
case study offers insight into how the study of brain and behavior begins
to describe consciousness. The patient, a 38-year-old man, had lingered
in a minimally conscious state (MCS) for more than 6 years after an
assault. He was occasionally able to communicate with single words,
occasionally able to follow simple commands. He was able to make a
few movements but could not feed himself despite 2 years of inpatient
rehabilitation and 4 years in a nursing home.
minimally conscious state (MCS) Condition in which a person can
display some rudimentary behaviors, such as smiling or uttering a
few words, but is otherwise not conscious.
This patient is one of approximately 1.4 million people each year in
the United States who, as described by Fred Linge in Clinical Focus 1-1 ,
contend with TBI. Among them, as many as 100,000 may become
comatose; as few as 20 percent recover consciousness. Among the
remaining TBI patients, some are diagnosed as being in a persistent
vegetative state (PVS), alive but unable to communicate or to function
independently at even the most basic level. Their brain damage is so
extensive that no recovery can be expected. Others, such as the assault
victim described earlier, are diagnosed as minimally conscious because
behavioral observation and brain imaging studies suggest that they do have
a great deal of functional brain tissue remaining.
persistent vegetative state (PVS) Condition in which a person is
alive but unaware, unable to communicate or to function
independently at even the most basic level.
Adrian Owen (2015) and his colleagues have found that by imaging
the brains of comatose patients, they can assess, from the patterns of
activity in their brains, the extent to which the patients are conscious. Using an
imaging system that measures brain function in terms of oxygen use,
Owen’s group discovered that some comatose patients are actually
locked in, as was Martin Pistorius, whom you met in Section 1-1 .
Furthermore, these investigators devised ways to communicate with
conscious patients by teaching them a language that signals changes in
their brains’ activity patterns. For example, while imaging the brain of
control subjects, Owen’s group asks them to imagine hitting a tennis ball
with a racket. The group identifies the active brain region associated with
the imaginary act. They then ask patients to imagine hitting a tennis ball
and so determine from their brain images whether they understand. If the
patient understands the instruction and so demonstrates consciousness,
procedures for further communication and rehabilitation can begin.
On the rehabilitation front, Nicholas Schiff and his colleagues (Schiff
& Fins, 2007) reasoned that, if they could stimulate their MCS patient’s
brain by administering a small electrical current, they could improve his
behavioral abilities. As part of a clinical trial (a consensual experiment
directed toward developing a treatment), they implanted thin wire
electrodes in his brainstem so they could administer a small electrical
current.
clinical trial Consensual experiment directed toward developing a
treatment.
Through these electrodes, which are visible in the X-ray image shown
in Figure 1-6 , the investigators applied the electrical stimulation for 12
hours each day. The procedure is called deep brain stimulation (DBS).
The researchers found dramatic improvement in the patient’s behavior
and ability to follow commands. For the first time, he was able to feed
himself and swallow food. He could even interact with his caregivers and
watch television, and he showed further improvement in response to
rehabilitation.
deep brain stimulation (DBS) Neurosurgery in which electrodes
implanted in the brain stimulate a targeted area with a low-voltage
electrical current to facilitate behavior.
Classification of Life
Taxonomy is the branch of biology concerned with naming and
classifying species by grouping representative organisms according to
their common characteristics and their relationships to one another.
As shown in the left column of the figure Taxonomy of Modern
Humans, which illustrates the human lineage, the broadest unit of
classification is a kingdom, with more subordinate groups being
phylum, class, order, family, genus, and species. This taxonomic
hierarchy is useful in helping us trace the evolution of brain cells and
the brain.
We humans belong to the animal kingdom, the chordate phylum, the
mammalian class, the primate order, the great ape family, the genus
Homo, and the species sapiens. Animals are usually identified by their
genus and species names. So we humans are called Homo sapiens
sapiens, meaning wise, wise human.
The branches in the figure Cladogram, which shows the taxonomy
of the animal kingdom, represent the evolutionary sequence
(phylogeny) that connects all living organisms. Cladograms are read
from left to right: the most recently evolved organism (animal) or trait
(muscles and neurons) is farthest to the right.
Of the five kingdoms of living organisms represented in the
cladogram, only the one most recently evolved, Animalia, contains
species with muscles and nervous systems. It is noteworthy that
muscles and nervous systems evolved together to underlie the forms of
movement (behavior) that distinguish members of the animal kingdom.
The figure Evolution of the Nervous System shows the taxonomy of
the 15 groups, or phyla, of Animalia, classified according to increasing
complexity of nervous systems and movement.
Taxonomy of Modern Humans
In proceeding to the right from the nerve net, we find that nervous
systems in somewhat more recently evolved phyla, such as flatworms,
have more complex structure. These organisms have heads and tails,
and their bodies show both bilateral symmetry (one half of the body is
the mirror image of the other) and segmentation (the body is composed
of similarly organized parts). The structure of the human spinal cord
resembles this segmented nervous system.
Cladogram
Frog
Bird
Human
FIGURE 1-8 Brain Evolution The brains of representative chordate
species have many structures in common, illustrating a single basic brain
plan.
The cerebrum and the cerebellum are proportionately small and smooth
in the earliest evolved classes (e.g., fish, amphibians, and reptiles). In later-
evolved chordates, especially the birds and mammals, these structures are
much more prominent. In many large-brained mammals, both structures are
extensively folded, which greatly increases their surface area while allowing
them to fit into a small skull, just as folding a large piece of paper enables it
to occupy a small envelope.
Increased size and folding are particularly pronounced in dolphins and
primates, animals with large brains relative to their body size. Because
relatively large brains with a complex cerebrum and cerebellum have
evolved in a number of animal lineages, humans are neither unique nor
special in these respects. We humans are distinguished, however, in
belonging to the large-brained primate lineage and are unique in having the
largest, most complex brain in this lineage.
1-3 REVIEW
Evolution of Brains and of Behavior
Before you continue, check your understanding.
1 . Because brain cells and muscles evolved only once in the animal
kingdom, a similar basic pattern exists in the ___________ of all
animals.
2 . Evolutionary relationships among the nervous systems of animal
lineages are classified by increasing complexity, progressing from the
simplest ___________ to a ___________ segmented nervous system
to nervous systems controlled by ___________ to nervous systems in
the phylum ___________, which feature a brain and spinal cord.
3 . A branching diagram that represents groups of related animals is
called a ___________.
4 . Given that a relatively large brain with a complex cerebrum and
cerebellum has evolved in a number of animal lineages, what if
anything makes the human brain unique?
Answers appear at the back of the book.
1-4 Evolution of the Human Brain and Behavior
Anyone can see similarities among humans, apes, and monkeys. Those
similarities extend to the brain as well. In this section, we consider the
brains and behaviors of some of the more prominent ancestors that link ancestral
apes to our brain and our behaviors. Then we consider the relation
between brain complexity and behavior across species. We conclude by
surveying leading hypotheses about how the human brain evolved to
become so large and the behavior that it mediates so complex. The
evolutionary evidence shows that we humans are specialized in having an
upright posture, making and using tools, and developing language but that
we are not special, because our ancestors also shared these traits, at least
to some degree.
These early hominids were among the first primates to show distinctly
human traits, including walking upright and using tools. Scientists have
deduced their upright posture from the shape of their back, pelvic, knee,
and foot bones and from a set of fossilized footprints that a family of
australopiths left behind, walking through freshly fallen volcanic ash
some 3.8 million years ago. These footprints feature impressions of a
well-developed arch and an unrotated big toe—more like humans’ than
other apes’. (Nevertheless, australopiths retained the ability to skillfully
climb trees.) The bone structure of their hands evinces tool use (Pickering
et al., 2011).
Australian Raymond Dart coined Australopithecus in naming the
skull of a child he found among fossilized remains from a limestone
quarry near Taung, South Africa, in 1924. His choice of a name that also
evokes his native land probably was no accident.
The first humans who spread beyond Africa migrated into Europe and
Asia. This species was Homo erectus (upright human ), so named because
of the mistaken notion that its predecessor, H. habilis, had a stooped
posture. Homo erectus first shows up in the fossil record about 1.6 million
years ago. As shown in Figure 1-11 , its brain was bigger than that of any
preceding hominid, overlapping in size the measurements of present-day
human brains. The tools made by H. erectus were more sophisticated than
those made by H. habilis. An especially small subspecies of H. erectus,
about 3 feet tall, was found on the Indonesian island of Flores. Named
Homo floresiensis, these hominids lived up to about 13,000 years ago
(Gordon et al., 2008).
FIGURE 1-11 Increases in Hominid Brain Size The brain of
Australopithecus was about the same size as that of living nonhuman apes,
but succeeding members of the human lineage display increased brain
size. Data from Johanson and Edey, 1981
Africa’s Great Rift Valley cut off ape species living in a wetter climate to the
west from species that evolved into hominids, adapted to a drier climate to
the east.
Spider monkey diet
Howler monkey diet
FIGURE 1-18 Neoteny The shape of an adult human’s head more closely
resembles a juvenile chimpanzee's head (left) than an adult chimp's
head (right). This observation leads to the hypothesis that we humans may
be neotenic descendants of our more apelike common ancestors.
Intelligence
In The Descent of Man, Charles Darwin detailed the following paradox:
No one, I presume, doubts that the large proportion which the size of man's brain bears to his
body, compared to the same proportion in the gorilla or orang, is closely connected with his
higher mental powers…. On the other hand, no one supposes that the intellect of any two
animals or of any two men can be accurately gauged by the cubic contents of their skulls.
(Darwin, 1871, p. 37)
Acquisition of Culture
In evolutionary terms, the modern human brain developed rapidly. Many
behavioral changes differentiate us from our primate ancestors, and these
adaptations took place more rapidly still, long after the modern brain had
evolved. The most remarkable thing that our brains have made possible
is ever more complex culture—learned behaviors passed from
generation to generation through teaching and experience.
culture Learned behaviors that are passed on from one generation to
the next through teaching and imitation.
Cultural growth and adaptation render many contemporary human
behaviors distinctly different from those of Homo sapiens living 200,000
years ago. Only 30,000 years ago, modern humans made the first artistic
relics: elaborate paintings on cave walls and carved ivory and stone
figurines. Agriculture appears still more recently, about 15,000 years
ago, and reading and writing were invented only about 7000 years ago.
Saint Ambrose, who lived in the fourth century, is reportedly the
first person who could read silently.
Most forms of mathematics and many of our skills in using
mechanical and digital devices have still more recent origins. Early H.
sapiens brains certainly did not evolve to select smart phone apps or
imagine traveling to distant planets. Apparently, the things that the
human brain did evolve to do contained the elements necessary for
adapting to more sophisticated skills.
Alex Mesoudi and his colleagues (2006) suggest that cultural
elements, ideas, behaviors, or styles that spread from person to person—
called memes (after genes, the elements of physical evolution)—can also
be studied within an evolutionary framework. They propose that
individual differences in brain structure may favor the development of
certain memes. Once developed, memes would in turn exert selective
pressure on further brain development. For example, chance variations in
individuals’ brain structure may have favored tool use in some
individuals. Tool use proved so beneficial that toolmaking itself exerted
selective pressure on a population to favor individuals well skilled in tool
fabrication.
meme An idea, behavior, or style that spreads from person to person
within a culture.
Similar arguments can be made with respect to other memes, from
language to music, from mathematics to art. Mesoudi’s reasoning
supports neuroscience’s ongoing expansion into seemingly disparate
disciplines, including linguistics, the arts, business, and economics.
Studying the human brain, far from examining a body organ’s structure,
means investigating how it acquires culture and fosters adaptation as the
world changes and as the brain changes the world.
Section 15-3 explores some of psychology’s expanding frontiers.
1-5 REVIEW
Modern Human Brain Size and Intelligence
Before you continue, check your understanding.
1 . Behavior that is displayed by all members of a species is called
___________.
2 . Some modern human behavior is inherent to our nervous system, but
far more is learned—passed generation to generation by ___________.
Ideas, behaviors, or styles called ___________ may spread from
person to person and culture to culture.
3 . Spearman proposed a common intelligence factor he called
___________. Gardner supports the idea of ___________.
4 . Explain the reasoning behind the statement that what is true for
evolutionary comparisons across different species may not be true for
comparisons within a single species.
Answers appear at the back of the book.
2-1 Overview of Brain Function and Structure
The brain’s primary function is to produce behavior, or movement. To
produce behavior as we search, explore, and manipulate our
environment, the brain must absorb information about the world—about
the objects around us: their size, shape, and location. Without stimuli, the
brain cannot orient the body and direct it to produce an appropriate
response.
The nervous system’s organs are designed to admit information from
the world and to convert this information into biological activity that
produces perception, subjective experiences of reality. The brain thus
produces what we believe is reality so that we can move. This subjective
reality is essential to carrying out any complex task.
Principle 1: The nervous system produces movement in a perceptual
world the brain constructs.
When you answer the telephone, for example, your brain directs your
body to reach for it as the nervous system responds to vibrating air
molecules by producing the subjective experience of a ringtone. We
perceive this stimulus as sound and react to it as if it actually exists,
when in fact the sound is merely a fabrication of the brain. That
fabrication is produced by a chain reaction that takes place when
vibrating air molecules hit the eardrum. Without the nervous system,
especially the brain, sound does not exist—only the movement of air
molecules.
But there is more to hearing a phone’s ringtone than vibrating air
molecules. Our mental construct of reality is based not only on the
sensory information we receive but also on the cognitive processes we
might use to interact with that incoming information. Hearing a ringtone
when we are expecting a call has a meaning vastly different from its
ringing at three o’clock in the morning, when we are not expecting a call.
The subjective reality the brain constructs can be better understood by
comparing the sensory realities of two different kinds of animals. You
are probably aware that dogs perceive sounds that humans do not. This
difference in perception does not mean that a dog’s nervous system is
better than ours or that our hearing is poorer. Rather, the perceptual
world constructed by a dog brain simply differs from that of a human
brain. Neither experience is “correct.” The difference in subjective
experience is due merely to two differently evolved systems for
processing sensory stimuli.
Section 9-1 elaborates on the nature of sensation and perception.
When it comes to visual perception, our world is rich with color,
whereas dogs see very little color. Human brains and dog brains
construct different realities. Subjective differences in brains exist for
good reason: they allow different animals to exploit different features in
their environments. Dogs use their hearing to detect the movements of
prey, such as mice in the grass; early humans probably used color vision
for identifying ripe fruit in trees. Evolution, then, fosters adaptability,
equipping each species with a view of the world that helps it survive.
• The somatic nervous system (SNS) includes all the spinal and cranial
nerves carrying sensory information to the CNS from the muscles,
joints, and skin. It also transmits outgoing motor instructions that
produce movement.
somatic nervous system (SNS) Part of the PNS that includes the
cranial and spinal nerves to and from the muscles, joints, and skin,
which produce movement, transmit incoming sensory input, and
inform the CNS about the position and movement of body parts.
• The autonomic nervous system (ANS) balances the body’s internal
organs by producing the rest-and-digest response through the
parasympathetic (calming) nerves or the fight-or-flight response or
vigorous activity through the sympathetic (arousing) nerves.
autonomic nervous system (ANS) Part of the PNS that regulates
the functioning of internal organs and glands.
• The enteric nervous system (ENS), formed by a mesh of neurons
embedded in the lining of the gut, controls the gut. The ENS
communicates with the CNS via the ANS but mostly operates
autonomously.
enteric nervous system (ENS) Mesh of neurons embedded in the
lining of the gut, running from the esophagus through the colon;
controls the gut.
The directional flow of neural information is important. Afferent
(incoming) information is sensory, coming into the CNS or one of its
parts, whereas efferent (outgoing) information is leaving the CNS or one
of its parts. When you step on a tack, the afferent sensory signals are
transmitted from the body into the brain and felt as pain. Efferent signals
from the brain trigger a motor response: you lift your foot ( Figure 2-3 ).
afferent Conducting toward a CNS structure.
efferent Conducting away from a CNS structure.
FIGURE 2-3 Neural Information Flow
Spatial Orientation
[Figure: planes of section (coronal, horizontal, and sagittal) shown with frontal and medial views]
Anterior: Near or toward the front of the animal or the front of the head (see also frontal and rostral)
Caudal: Near or toward the tail of the animal (see also posterior)
Coronal: Cut vertically from the crown of the head down; used to reference the plane of a brain section that reveals a frontal view
Frontal: Of the front (see also anterior and rostral); in reference to brain sections, a viewing orientation from the front
Horizontal: Cut along the horizon; used to reference the plane of a brain section that reveals a dorsal view
Medial: Toward the middle, specifically the body's midline; in reference to brain sections, a side view of the central structures
Posterior: Near or toward the animal's tail (see also caudal); for the human spinal cord, at the back
Rostral: Toward the beak (front) of the animal (see also anterior and frontal)
Sagittal: Cut lengthways from front to back of the skull to reveal a medial view into the brain from the side; a cut in the midsagittal plane divides the brain into symmetrical halves
Ventral: On or toward the belly of four-legged animals (see also inferior); in reference to human brain nuclei, below
Your right hand, if made into a fist, represents the positions of the
lobes of the left hemisphere of your brain.
FIGURE 2-5 The Cerebral Cortex Each cerebral hemisphere is
divided into four lobes: frontal, parietal, temporal, and occipital, shown at left
as oriented in the head. The brain surface, or cerebral cortex, shown in the
frontal view, is a thin sheet of nerve tissue, heavily folded to fit inside the
skull. Your right fist can map the orientation of the left hemisphere and its
lobes.
Between the arachnoid layer and the pia mater flows cerebrospinal
fluid (CSF), a colorless solution of sodium chloride and other salts. CSF
cushions the brain so that it can move or expand slightly without pressing
on the skull. The symptoms of meningitis, an infection of the meninges and
CSF, are described in Clinical Focus 2-2 , Meningitis and Encephalitis, on
page 42 .
cerebrospinal fluid (CSF) Clear solution of sodium chloride and
other salts that fills the ventricles inside the brain and circulates around
the brain and spinal cord beneath the arachnoid layer in the
subarachnoid space.
Cerebral Geography
After removing the meninges, we can examine the brain’s surface features,
most prominently its two nearly symmetrical left and right hemispheres.
Figure 2-5 diagrams the left hemisphere of a typical human forebrain
oriented in the upright human skull. The outer forebrain consists of a thin,
folded film of nerve tissue, the cerebral cortex, detailed in the frontal view
in Figure 2-5 . The word cortex, Latin for tree bark, is apt, considering the
cortex’s heavily folded surface and its location, covering most of the rest of
the brain. Unlike the bark on a tree, however, the brain’s folds are not
random but rather demarcate its functional cortical zones.
cerebral cortex Thin, heavily folded film of nerve tissue composed of
neurons that is the outer layer of the forebrain. Also called neocortex.
Make a fist with your right hand and hold it up, as shown on the right in
Figure 2-5 , to represent the positions of the forebrain’s broad divisions, or
lobes, in the skull. Each lobe is named for the skull bone it lies beneath.
• The forward-pointing temporal lobe lies at the side of the brain, in
approximately the same place as the thumb on your upraised fist. The
temporal lobe functions in connection with hearing and with language
and musical abilities.
temporal lobe Part of the cerebral cortex that functions in connection
with hearing, language, and musical abilities; lies below the lateral
fissure, beneath the temporal bone at the side of the skull.
• Immediately above your thumbnail, your fingers correspond to the
location of the frontal lobe, often characterized as performing the brain’s
executive functions, such as decision making.
frontal lobe Part of the cerebral cortex often generally characterized
as performing the brain’s executive functions, such as decision
making; lies anterior to the central sulcus and beneath the frontal bone
of the skull.
• The parietal lobe is at the top of the skull, as represented by your
knuckles, behind the frontal lobe and above the temporal lobe. Parietal
functions include directing our movements toward a goal or to perform a
task, such as grasping an object.
parietal lobe Part of the cerebral cortex that directs movements
toward a goal or to perform a task, such as grasping an object; lies
posterior to the central sulcus and beneath the parietal bone at the top
of the skull.
• The area at the back of each hemisphere, near your wrist, constitutes the
occipital lobe, where visual processing begins.
occipital lobe Part of the cerebral cortex where visual processing
begins; lies at the back of the brain and beneath the occipital bone.
Examining the Brain’s Surface from All Angles
As we look at the dorsal view in Figure 2-6 A , the brain’s wrinkled left
and right hemispheres resemble a walnut meat taken whole from its shell.
These hemispheres constitute the cerebrum, the major forebrain structure
and most recently evolved feature of the CNS. Visible from the opposite
ventral view in Figure 2-6B is the brainstem, including the wrinkly
hemispheres of the smaller cerebellum (Latin for little brain ). Both the
cerebrum and the brainstem are visible in the lateral and medial views in
Figure 2-6C and D .
[Figure panels: lateral, medial, and ventral views; middle and posterior cerebral arteries]
FIGURE 2-7 Major Cerebral Arteries Each of the three major arteries
that feed blood to the cerebral hemispheres branches extensively to supply
the regions shaded in pink.
Three major arteries send blood to the cerebrum—the anterior, middle,
and posterior cerebral arteries, shown in Figure 2-7 . Because the brain is
highly sensitive to blood loss, a blockage or break in a cerebral artery is
likely to lead to the death of brain tissue in the affected region. This condition, known as
stroke, is the sudden appearance of neurological symptoms as a result of
severely interrupted blood flow. Because the three cerebral arteries supply
different parts of the brain, strokes disrupt different brain functions,
depending on the artery affected.
stroke Sudden appearance of neurological symptoms as a result of
severely interrupted blood flow.
Section 16-3 elaborates on the effects of stroke and its treatment.
Because the brain’s connections are crossed, stroke in the left
hemisphere affects sensation and movement on the right side of the body.
The opposite is true for those with strokes in the right hemisphere. Clinical
Focus 2-3 , Stroke, on page 45 , describes some disruptions that stroke
causes, both to the person who has it and to those who care for stroke
victims.
Principle 3: Many brain circuits are crossed.
(A) Coronal section
(B) Frontal view
FIGURE 2-8 Coronal Brain Section (A) The brain is cut down the
middle parallel to the front of the body; then a coronal section is viewed at a
slight angle. This frontal view (B) displays white matter, gray matter, and the
lateral ventricles. Visible above the ventricles, a large bundle of fibers, the
corpus callosum, joins the hemispheres.
CLINICAL FOCUS 2-3 Stroke
Approximately every minute in the United States, someone has a
stroke with obvious visible symptoms—more than a half million
every year. Worldwide, stroke is the second leading cause of death.
Acute symptoms include facial droop, motor weakness in limbs,
visual disturbance, speech difficulties, and sudden onset of severe
headache.
In addition to visible strokes, at least twice as many silent strokes
may occur each year. These ministrokes occur primarily in the white
matter and do not produce obvious symptoms. (To view a brief video
on silent stroke, go to https://www.youtube.com/watch?v=J3fb0CaDpEk.)
Even with the best, fastest medical attention, most stroke patients
have some residual motor, sensory, or cognitive deficit. According to
the Canadian Stroke Network, for every 10 people who have a
stroke, 2 die, 6 are disabled to varying degrees, and 2 recover to a
degree but still have a diminished quality of life. Of those who
survive, 1 in 10 risk further stroke.
The consequences of stroke are significant for those who have
one, for their families, and for their lifestyles. Consider Mr. Anderson, a 45-
year-old electrical engineer, who took his three children to the
movies one Saturday afternoon in 1998 and collapsed. He had a
massive stroke of the middle cerebral artery in his left hemisphere.
The stroke has impaired Mr. Anderson’s language ever since, and
because the brain’s connections are crossed, his right-side motor
control as well.
Seven years after his stroke, Mr. Anderson remained unable to
speak, but he understood simple conversations. Severe difficulties in
moving his right leg required him to use a walker. He could not
move the fingers of his right hand and so had difficulty feeding
himself, among other tasks. Mr. Anderson will probably never return
to his engineering career or drive or get around on his own.
Like him, most stroke survivors require help to perform everyday
tasks. Caregivers are often female relatives who give up their own
careers and other pursuits. Half of these caregivers develop emotional
illness, primarily depression or anxiety or both, within a year or so. Lost
income and stroke-related medical bills significantly affect the
family’s living standard.
We tend to speak of stroke as a single disorder, but two major
types of strokes have been identified. In the more common and often
less severe ischemic stroke, a blood vessel is blocked, as by a clot.
The more severe hemorrhagic stroke results from a burst vessel
bleeding into the brain.
The hopeful news is that ischemic stroke can be treated acutely
with a drug called tissue plasminogen activator (t-PA), which breaks
up clots and allows normal blood flow to return to an affected
region. Unfortunately, there is no treatment for hemorrhagic stroke,
for which the use of clot-busting t-PA would be disastrous.
Figure 2-10B clearly shows that the cortex covers the cerebral
hemispheres above the corpus callosum; below it are various internal
subcortical regions. The brainstem is a subcortical structure that
generally controls basic physiological functions. But many subcortical
regions are forebrain structures intimately related to the cortical areas
that process motor, sensory, perceptual, and cognitive functions. This
relation between the cortex and the subcortex alerts us to the concept that
redundant functions exist at many levels of nervous system organization.
Principle 4: The CNS functions on multiple levels.
If you were to compare medial views of the left and right
hemispheres, you would be struck by their symmetry. The brain, in fact,
has two of nearly every structure, one on each side. The few one-of-a-
kind structures, such as the third and fourth ventricles, lie along the
brain’s midline (see Figure 2-9B ). Another one-of-a-kind structure is the
pineal gland, which Descartes declared the seat of the mind in his
dualistic theory of how the brain works.
Principle 5: The brain is symmetrical and asymmetrical.
Microscopic Inspection: Cells and Fibers
The brain’s fundamental units—its cells—are so small that they can be
viewed only with the aid of a microscope. A microscope quickly reveals
that the brain has two main types of cells, illustrated in Figure 2-11 .
Neurons carry out the brain’s major functions, whereas glial cells aid and
modulate the neurons’ activities—for example, by insulating them. Both
neurons and glia come in many forms, each marked by the work that
they do.
Human brains contain about 86 billion neurons and 87 billion glia.
Section 3-1 examines their structures and functions in detail.
We can see the brain’s internal structures in even greater detail by
dyeing their cells with special stains (Figure 2-12 ). For example, if we
use a dye that selectively stains cell bodies, we can see that the neurons
in the cortical gray matter lie in layers, revealed by the bands of tissue in
Figure 2-12A and C. Each layer contains cells that stain
characteristically. Figure 2-12A and B shows that stained subcortical
regions are composed of clusters, or nuclei, of similar cells.
nuclei (sing. nucleus) A group of cells forming a cluster that can be
identified with special stains to form a functional grouping.
FIGURE 2-11 Brain Cells Branches emanate from the cell bodies of a
prototypical neuron (left) and a glial cell (right). This branching
organization increases the cell’s surface area. This type of neuron is called
a pyramidal cell because the cell body is shaped somewhat like a pyramid;
the glial cell is called an astrocyte because of its star-shaped appearance.
FIGURE 2-12 Cortical Layers and Glia Brain sections from the
left hemisphere of a monkey (midline is to the left in each image), viewed
through a microscope. Cells are stained with (A and C) a selective cell
body stain for neurons (gray matter) and (B and D) a selective fiber stain
for insulating glial cells, or myelin (white matter). The images reveal very
different views of the brain at the macro (A and B) and microscopic (C and
D) levels.
Development
The developing brain, which is less complex than the mature adult brain,
provides a clear picture of its basic structural plan. The striking
biological similarity of embryos as diverse as amphibians and mammals
is evident in the earliest stages of development. In the evolution of
complex nervous systems in vertebrate species, simpler and
evolutionarily more primitive forms have not been discarded and
replaced but rather added to. As a result, all anatomical and functional
features of simpler nervous systems are present in and form the base for
the most complex nervous systems, including ours.
The bilaterally symmetrical nervous system of simple worms, for
example, is common to complex nervous systems. Indeed, we can
recognize in humans the spinal cord that constitutes most of the simplest
fishes’ nervous system. The same is true of the brainstem of more
complex fishes, amphibians, and reptiles. The neocortex, although
particularly complex in humans, is clearly the same organ found in other
mammals.
Section 1-3 outlines nervous system evolution and Section 8-1 ,
developmental similarities among humans and other species.
Results
1. The demonstrator animal quickly learns to distinguish between the
colored balls.
2. When placed in isolation and tested later, the observer animals
selected the same object as the demonstrators, responded faster, and
performed the task correctly for 5 days without significant error.
Conclusion: Invertebrates display intelligent behavior, such as learning
by observation.
Research from G. Fiorito and P. Scotto (1992). Observational learning in Octopus vulgaris.
Science, 256, 545–547.
2-2 REVIEW
The Nervous System’s Evolutionary Development
Before you continue, check your understanding.
1 . The brains of vertebrate animals have evolved into three regions:
___________, ___________, and ___________.
2 . The functional levels of the nervous system interact, each region
contributing different aspects, or dimensions, to produce
___________.
3 . In a brief paragraph, explain how the evolution of the forebrain in
mammals reinforces the principle that the CNS functions on multiple
levels.
Answers appear at the back of the book.
Mediating Behavior
When we look under the hood, we can make some pretty good guesses
about what each part of a car engine does. The battery must provide
electrical power to run the radio and lights, for example, and because
batteries have to be charged, the engine must contain some mechanism
for charging them. We can take the same approach to deduce how the
parts of the brain function. The part connected to the optic nerve coming
from each eye must have something to do with vision. Structures
connected to the auditory nerve coming from each ear must have
something to do with hearing.
From such simple observations, we can begin to understand how the
brain is organized. The real test comes in analyzing actual brain function:
how this seeming jumble of parts produces behaviors as complex as
human thought. The place to start is the brain’s functional anatomy:
learning the name of a particular CNS structure is pointless without also
learning something about what it does. We focus now on the names and
functions of the three major CNS components: spinal cord, brainstem,
and forebrain.
Spinal Cord
Although producing movement is the brain’s principal function,
ultimately the spinal cord executes most body movements, usually
following instructions from the brain but at times acting independently
via the somatic nervous system. To understand how important the spinal
cord is, think of the old saying “running around like a chicken with its
head cut off.” When a chicken’s head is lopped off for the farmer’s
family dinner, the chicken is still capable of running around the barnyard
until it collapses from loss of blood. The chicken accomplishes this feat
because the spinal cord is acting independently of the brain.
Grasping the spinal cord’s complexity is easier once you realize that it
is not a single structure but rather a set of segmented switching stations.
As detailed in Section 2-4 , each spinal segment receives information
from a discrete part of the body and sends out commands to that area.
Spinal nerves, which are part of the SNS, carry sensory information to
the cord from the skin, muscles, and related structures and in turn send
motor instructions to control each muscle.
Brainstem
The brainstem begins where the spinal cord enters the skull and extends
upward into the lower areas of the forebrain. The brainstem receives
afferent nerves coming in from all of the body’s senses, and it sends
efferent nerves out to the spinal cord to control virtually all of the body’s
movements except the most complex movements of the fingers and toes.
The brainstem, then, both directs movements and creates a sensory
world.
brainstem Central structure of the brain, including the hindbrain,
midbrain, thalamus, and hypothalamus, that is responsible for most
unconscious behavior.
Alphabetically, afferent comes before efferent : sensory signals must
enter the brain before an outgoing signal triggers a motor response.
In some vertebrates, such as frogs, the entire brain is largely
equivalent to the mammalian or avian brainstem. And frogs get along
quite well, demonstrating that the brainstem is a fairly sophisticated
piece of machinery. If we had only a brainstem, we would still be able to
construct a world, but it would be a far simpler sensorimotor world, more
like the world a frog experiences.
The brainstem, which is responsible for most unconscious behavior,
can be divided into three regions: hindbrain, midbrain, and diencephalon,
meaning between brain because it borders the brain’s upper and lower
parts. In fact, the between-brain status of the diencephalon can be seen in
a neuroanatomical inconsistency: some anatomists place it in the
brainstem and others place it in the forebrain. Figure 2-15 A illustrates
the location of these three brainstem regions under the cerebral
hemispheres. Figure 2-15B compares the shape of the brainstem regions
to the lower part of your arm when held upright. The hindbrain is long
and thick like your forearm, the midbrain is short and compact like your
wrist, and the diencephalon at the end is bulbous like a fist.
The hindbrain and midbrain are essentially extensions of the spinal
cord; they developed first as vertebrate animals evolved a brain at the
anterior end of the body. It makes sense, therefore, that these lower
brainstem regions should retain a division between structures having
sensory functions and those having motor functions, with sensory
structures lying dorsal and motor ones ventral, or in upright humans,
posterior and anterior.
Principle 7: Sensory and motor divisions permeate the nervous
system.
Each brainstem region performs more than a single task. Each
contains various groupings of nuclei that serve various purposes. All
three regions, in fact, have both sensory and motor functions. However,
the hindbrain is especially important in motor functions, the midbrain in
sensory functions, and the diencephalon in integrative sensorimotor
tasks. Here we consider the central functions of these three regions; later
chapters contain more detailed information about them.
FIGURE 2-15 Brainstem Structures (A) Medial view shows the
relation of the brainstem to the cerebral hemispheres. (B) The shapes and
relative sizes of the brainstem’s three parts are analogous to your fist,
wrist, and forearm.
FIGURE 2-16 The Cerebellum and Movement (A) Their
relatively large cerebellum enables finely coordinated movements such as
flying and landing in birds and pouncing on prey in cats. Slow-moving
animals such as the sloth have a smaller cerebellum relative to brain size.
(B) Like the cerebrum, the human cerebellum has left and right
hemispheres, an extensively folded cortex with gray and white matter, and
subcortical nuclei.
Hindbrain
The hindbrain controls motor functions ranging from breathing to
balance to fine movements, such as those used in dancing. Its most
distinctive structure, and one of the largest in the human brain, is the
cerebellum. Its relative size increases with the physical speed and
dexterity of a species, as shown in Figure 2-16 A .
hindbrain Evolutionarily the oldest part of the brain; contains the
pons, medulla, reticular formation, and cerebellum, structures that
coordinate and control most voluntary and involuntary movements.
Animals that move relatively slowly (such as a sloth) have a relatively
small cerebellum for their body size. Animals that can perform rapid
acrobatic movements (such as a hawk or a cat) have a very large
cerebellum relative to overall brain size. The human cerebellum, which
resembles a cauliflower in the medial view in Figure 2-16B , likewise is
important in controlling complex movements. But cerebellar size in
humans is also related to cognitive capacity. Relative to other mammals,
apes show an expansion of the cerebellum that correlates with increased
capacity for planning and executing complex behavioral sequences,
including tool use and language (see Barton, 2012).
As we look beyond the cerebellum at the rest of the hindbrain, shown
in Figure 2-17 , we find three subparts: the reticular formation, the pons,
and the medulla. Extending the length of the entire brainstem at its core,
the reticular formation is a netlike mixture of neurons (gray matter) and
nerve fibers (white matter). This nerve net gives the structure the mottled
appearance from which its name derives (from Latin rete, meaning net ).
The reticular formation’s nuclei are localized into small patches along its
length. Each has a special function in stimulating the forebrain, such as
in waking from sleep.
reticular formation Midbrain area in which nuclei and fiber
pathways are mixed, producing a netlike appearance; associated
with sleep–wake behavior and behavioral arousal.
FIGURE 2-17 Hindbrain The principal hindbrain structures integrate
voluntary and involuntary body movements. The reticular formation is
sometimes called the reticular activating system.
The pons and medulla contain substructures that control many vital
body movements. Nuclei in the pons receive inputs from the cerebellum
and actually form a bridge from it to the rest of the brain (in Latin, pons
means bridge ). At the rostral tip of the spinal cord, the medulla’s nuclei
regulate such vital functions as breathing and the cardiovascular system.
For this reason, a blow to the back of the head can kill you: your
breathing stops if the hindbrain control centers are injured.
Midbrain
In the midbrain, a sensory component, the tectum (roof), is dorsal
(posterior in upright humans), whereas a motor structure, the
tegmentum (floor), is ventral (anterior in humans; Figure 2-18 A ). The
tectum receives a massive amount of sensory information from the eyes
and ears. The optic nerve sends a large bundle of fibers to the superior
colliculus, whereas the inferior colliculus receives much of its input from
auditory pathways. The colliculi function not only to process sensory
information but also to produce orienting movements related to sensory
inputs, such as turning your head to see a sound’s source.
midbrain Central part of the brain; contains neural circuits for
hearing and seeing as well as orienting movements.
tectum Roof (area above the ventricle) of the midbrain; its functions
are sensory processing, particularly visual and auditory, and the
production of orienting movements.
tegmentum Floor (area below the ventricle) of the midbrain; a
collection of nuclei with movement-related, species-specific, and
pain perception functions.
orienting movement Movement related to sensory inputs, such as
turning the head to see the source of a sound.
FIGURE 2-18 Midbrain (A) Structures in the midbrain are critical for
producing orienting movements, species-specific behaviors, and pain
perception. (B) The tegmentum in cross section, revealing various nuclei.
Colliculus comes from collis, Latin for hill. The colliculi resemble four little
hills on the midbrain’s posterior surface.
Forebrain
The largest and most recently evolved region of the mammalian brain is
the forebrain. Its major internal and external structures are shown in
Figure 2-20 . Each of its three principal structures has multiple
functions. To summarize briefly, the cerebral cortex regulates a host of
mental activities ranging from perception to planning; the basal ganglia
control voluntary movement; and the limbic system regulates emotions
and behaviors that produce and require memory.
forebrain Evolutionarily the newest part of the brain; coordinates
advanced cognitive functions such as thinking, planning, and
language; contains the limbic system, basal ganglia, and neocortex.
Cerebral Cortex
The forebrain contains two types of cortex, three- or four-layered and
six-layered. The six-layered neocortex (new bark ) is the tissue visible
when we view the brain from the outside, as in Figure 2-5 . The more
recently evolved neocortex is unique to mammals; its primary function is
to construct a perceptual world and respond to that world. The older,
more primitive three- or four-layered cortex, sometimes called
allocortex, lies adjacent to the neocortex. Allocortex is found in the
brains of other chordates in addition to mammals, especially in birds and
reptiles.
neocortex (cerebral cortex) Most recently evolved outer layer (new
bark ) of the forebrain, composed of about six layers of gray matter;
constructs our reality.
The allocortex plays a role in controlling motivational and emotional
states as well as in certain forms of memory. Although neocortex and
allocortex have anatomical and functional differences, those distinctions
are not critical for most discussions in this book. Therefore, we usually
refer to both types of tissue simply as cortex.
Measured by volume, the cortex makes up most of the forebrain,
constituting 80 percent of the human brain overall. It is the brain region
that has expanded the most in the course of mammalian evolution. The
human neocortex has a surface area as large as 2500 square centimeters
but a thickness of only 1.5 to 3.0 millimeters, an area equivalent to about
four pages of this book. By contrast, a chimpanzee has a cortical area
equivalent to about one page.
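To put these numbers in perspective, the short calculation below converts the quoted cortical surface area into page equivalents. The page dimensions are assumed for illustration (they are not measurements of this book), so treat the result as a rough sanity check rather than an exact figure.

```python
# Rough sanity check of the cortical surface area comparison above.
# Page dimensions are assumed values, not taken from the text.

cortex_area_cm2 = 2500          # human neocortical surface area quoted in the text
chimp_pages = 1                 # chimpanzee cortical area ~ one page (from the text)

page_width_cm, page_height_cm = 21.5, 27.5   # assumed page size
page_area_cm2 = page_width_cm * page_height_cm

human_pages = cortex_area_cm2 / page_area_cm2
print(f"One page ~ {page_area_cm2:.0f} cm^2")
print(f"Human neocortex ~ {human_pages:.1f} pages of this size")
print(f"Human-to-chimpanzee cortical area ratio ~ {human_pages / chimp_pages:.0f}:1")
```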
The pattern of sulci and gyri formed by the folding of the cortex
varies across species. Some, such as rats, have no sulci or gyri, whereas
in carnivores, such as cats, the gyri form a longitudinal pattern. In
primates, the sulci and gyri form a more diffuse pattern.
Monkey, chimpanzee, and human (figure panels comparing cortical folding across species).
Cortical Lobes
To review, the human cortex consists of the nearly symmetrical left and
right hemispheres, which are separated by the longitudinal fissure,
shown at left in Figure 2-21 . As shown at right, each hemisphere is
subdivided into four lobes corresponding to the skull bones overlying
them: frontal, temporal, parietal, and occipital. Unfortunately, bone
location and brain function are unrelated. As a result, the cortical lobes
are rather arbitrarily defined anatomical regions that include many
functional zones.
Nevertheless, we can attach some gross functions to each lobe. The
three posterior lobes have sensory functions: the occipital lobe is visual;
the parietal lobe is tactile; and the temporal lobe is visual, auditory, and
gustatory. In contrast, the frontal lobe is motor and is sometimes called
the brain’s executive because it integrates sensory and motor functions
and formulates plans of action. We can also predict some effects of
injuries to each lobe:
Principle 9: Brain functions are localized and distributed.
• People with an injured occipital lobe have deficits in processing visual
information. Although they may perceive light versus dark, they may
be unable to identify either the shape or the color of objects.
• Injuries to the parietal lobe make it difficult to identify or locate
stimulation on the skin. Deficits in moving the arms and hands to
points in space may occur.
• Temporal lobe injuries result in difficulty recognizing sounds, although
unlike people with occipital injuries, those with temporal injury can
still recognize that they are hearing something. Temporal lobe injuries
can also produce difficulties in processing complex visual information,
such as faces.
• Individuals with frontal lobe injuries may have difficulties organizing
and evaluating their ongoing behavior as well as planning for the
future.
Fissures and sulci often establish the boundaries of cortical lobes
(Figure 2-21 , right). For instance, in humans, the central sulcus and
lateral fissure form the boundaries of each frontal lobe as well as the
boundaries of each parietal lobe lying posterior to the central sulcus. The
lateral fissure demarcates each temporal lobe, forming its dorsal
boundary. The occipital lobes are not so clearly separated from the
parietal and temporal lobes because no large fissure marks their
boundaries.
Anatomical features presented in Section 9-2 define occipital lobe
boundaries.
Dorsal view of brain
Basal Ganglia
A collection of nuclei that lie in the forebrain just below the white matter
of the cortex, the basal ganglia consist of three principal structures: the
caudate nucleus, the putamen, and the globus pallidus, all shown in
Figure 2-24 . Together with the thalamus and two closely associated
nuclei, the substantia nigra and subthalamic nucleus, the basal ganglia
form a system that functions primarily to control voluntary movement.
basal ganglia Subcortical forebrain nuclei that coordinate voluntary
movements of the limbs and body; connected to the thalamus and to
the midbrain.
We can observe the functions of the basal ganglia by analyzing the
behavior resulting from the many diseases that interfere with their
healthy functioning. Parkinson disease, a motor system disorder
characterized by severe tremors, muscular rigidity, and a reduction in
voluntary movement, is among the most common movement disorders
among the elderly. People with Parkinsonism take short, shuffling steps;
display bent posture; and may need a walker to get around. Many have
almost continuous hand tremors and sometimes head tremors as well.
Another disorder of the basal ganglia is Tourette syndrome,
characterized by various motor tics; involuntary vocalizations (including
curse words and animal sounds); and odd, involuntary body movements,
especially of the face and head.
Parkinson disease Disorder of the motor system correlated with a
loss of dopamine from the substantia nigra and characterized by
tremors, muscular rigidity, and a reduction in voluntary movement.
Tourette syndrome Disorder of the basal ganglia characterized by
tics, involuntary vocalizations (including curse words and animal
sounds), and odd, involuntary movements of the body, especially of
the face and head.
Details on Parkinson disease appear in Focus boxes 5-2, 5-3, and 5-
4, Sections 11-3 and 16-3 . Focus 11-4 details Tourette syndrome.
Neither Parkinsonism nor Tourette syndrome is a disorder of
producing movements, as in paralysis. Rather, they are disorders of
controlling movements. The basal ganglia, therefore, must play a critical
role in controlling and coordinating movement patterns rather than in
activating the muscles to move.
Limbic System
In the 1930s, psychiatry was dominated by the theories of Sigmund
Freud, who emphasized the roles of sexuality and emotion in human
behavior. At the time, the brain regions controlling these behaviors had
not been identified, and coincidentally, a group of brain structures
collectively called the limbic lobe had no known function. It was a
simple step to thinking that perhaps the limbic structures played a central
role in sexuality and emotion.
One sign that this hypothesis might be correct came from James
Papez (1937), who discovered that people with rabies have infection in
the limbic structures, and one symptom of rabies is heightened
emotionality. We now know that such a simple view is inaccurate. In
fact, the limbic system is not a unitary system at all. And although some
limbic structures have roles in emotional and sexual behaviors, limbic
structures serve other functions too, including contributing to memory
and motivation.
limbic system Disparate forebrain structures lying between the
neocortex and the brainstem that form a functional system
controlling affective and motivated behaviors and certain forms of
memory; includes cingulate cortex, amygdala, and hippocampus,
among other structures.
Figure 2-25 diagrams the principal limbic structures Papez proposed.
The hippocampus, cingulate cortex (a type of allocortex), and associated
structures participate in certain memory functions as well as in
controlling navigation in space. Many limbic structures, in particular the
amygdala, are also believed to contribute to the rewarding properties of
psychoactive drugs and other potentially addictive substances and
behaviors. Repeated exposure to drugs such as amphetamine or nicotine
produces both chemical and structural changes in the cingulate cortex
and hippocampus, among other structures.
FIGURE 2-25 Limbic System This medial view of the right hemisphere
illustrates the principal structures proposed by Papez to constitute the
limbic system. These structures participate in emotional and sexual
behaviors, motivation, and memory. For a contemporary view of limbic
anatomy, see Figure 12-18 .
Olfactory System
At the very front of the brain lie the olfactory bulbs, the organs
responsible for our sense of smell. The olfactory system is unique among
human senses, as Figure 2-26 shows, because it is almost entirely a
forebrain structure. The other sensory systems project most of their
inputs from the sensory receptors to the midbrain and thalamus.
Olfactory input takes a less direct route: the olfactory bulb sends most of
its inputs to a specialized region, the pyriform cortex, on the brain’s
ventral surface. From there, sensory input progresses to the amygdala
and the dorsomedial thalamus (see Figure 2-19 , right), which routes it to
the frontal cortex.
FIGURE 2-26 Sense of Smell Our small olfactory bulb lies at the base
of the forebrain, connects to receptor cells that lie in the nasal cavity, and
sends most of this input to the pyriform cortex en route to the amygdala
and thalamus.
Transmitting Information
The SNS is monitored and controlled by the CNS—the cranial nerves by
the brain and the spinal nerves by the spinal cord segments.
Cranial Nerves
The linkages provided by the cranial nerves between the brain and
various parts of the head and neck as well as various internal organs are
illustrated and tabulated in Figure 2-27 . Cranial nerves can have afferent
functions, such as sensory inputs to the brain from the eyes, ears, mouth,
and nose, or they can have efferent functions, such as motor control of the
facial muscles, tongue, and eyes. Some cranial nerves have both sensory
and motor functions, such as modulation of both sensation and movement
in the face.
cranial nerve One of a set of 12 nerve pairs that control sensory and
motor functions of the head, neck, and internal organs.
The 12 pairs of cranial nerves are known both by their numbers and by
their names, as listed in Figure 2-27 . One set of 12 controls the left side
of the head, whereas the other set controls the right side. This
arrangement makes sense for innervating duplicated parts of the head
(such as the eyes), but why separate nerves should control the right and
left sides of a singular structure (such as the tongue) is not so clear. Yet
that is how the cranial nerves work. If you have ever received lidocaine
(often called Novocaine) for dental work, you know that usually just one
side of your tongue becomes numb because the dentist injects the drug
into only one side of your mouth. The rest of the skin and muscles on
each side of the head are similarly controlled by cranial nerves located on
the same side.
We consider many cranial nerves in detail in later chapters’
discussions on topics such as vision, hearing, olfaction, taste, and stress
responses. For now, you simply need to know that cranial nerves form
part of the SNS, providing inputs to the brain from the head’s sensory
organs and muscles and controlling head and facial movements. Some
cranial nerves also contribute to maintaining autonomic functions by
connecting the brain and internal organs (the vagus, cranial nerve 10) and
by influencing other autonomic responses, such as salivation.
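As a study aid, the sketch below shows one way to represent the cranial nerve scheme just described: each nerve has a number, a name, and afferent, efferent, or mixed functions. Only the optic and vagus nerves are explicitly identified in this passage; the other entries follow standard anatomical numbering and are included purely for illustration.

```python
# A minimal sketch of the cranial nerve organization described above.
# Only a few of the 12 pairs are shown; numbering follows standard anatomy.
from dataclasses import dataclass

@dataclass
class CranialNerve:
    number: int
    name: str
    kind: str        # "afferent", "efferent", or "mixed"
    function: str

CRANIAL_NERVES = [
    CranialNerve(2, "optic", "afferent", "vision"),
    CranialNerve(7, "facial", "mixed", "facial movement and taste sensation"),
    CranialNerve(10, "vagus", "mixed", "links the brain with internal organs"),
    CranialNerve(12, "hypoglossal", "efferent", "tongue movement"),
]

def sensory_nerves(nerves):
    """Return nerves that carry afferent (sensory) information to the brain."""
    return [n for n in nerves if n.kind in ("afferent", "mixed")]

for n in sensory_nerves(CRANIAL_NERVES):
    print(f"Cranial nerve {n.number} ({n.name}): {n.function}")
```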
Spinal Nerves
The spinal cord lies inside the bony spinal column, which is made up of a
series of small bones called vertebrae (sing. vertebra ), categorized into
five anatomical regions from top to bottom: cervical, thoracic, lumbar,
sacral, and coccygeal, as diagrammed in Figure 2-28 A . You can think of
each vertebra in these five groups as a short segment of the spinal
column. The corresponding spinal cord segment in each vertebral region
functions as that segment’s minibrain.
vertebrae (sing. vertebra) The bones that form the spinal column.
This arrangement may seem a bit odd, but it has a long evolutionary
history. Think of a simpler animal, such as a snake, that evolved long
before humans did. A snake’s body is a segmented tube. In that tube is
another tube, the spinal cord, which also is segmented. Each of the
snake’s nervous system segments receives nerve fibers from sensory
receptors in the part of the body adjacent to it, and that nervous system
segment sends fibers back to the muscles in that body part. Each segment,
therefore, works independently.
A complication arises in animals such as humans, whose limbs may
originate at one spinal segment level but extend past other segments of
the spinal column. Your shoulders, for example, may begin at C5
(cervical segment 5), but your arms hang down well past the sacral
segments. So unlike the snake, whose spinal cord segments connect to
the body segments directly adjacent to them, the human body's
segments form more of a patchwork pattern, as shown in
Figure 2-28B . This arrangement makes sense if the arms are extended as
they are when we walk on all fours.
Sections 11-1 and 11-4 review the spinal cord’s contributions to
movement and to somatosensation.
Regardless of their complex pattern, however, our body segments still
correspond to the spinal cord segments. Each of these body segments is
called a dermatome (meaning skin cut ). A dermatome has both a
sensory nerve to send information from the skin, joints, and muscles to
the spinal cord and a motor nerve to control the muscle movements in
that particular body segment.
dermatome Body segment corresponding to a segment of the spinal
cord.
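The correspondence between spinal segments and dermatomes can likewise be captured in a small lookup table. The sketch below uses only the two example segments labeled in Figure 2-28B (C5 and L2); the region prefixes follow the five regions named above, and everything else is illustrative.

```python
# A minimal sketch of the segment-to-dermatome correspondence described above.
SPINAL_REGIONS = {       # region prefix -> region name
    "C": "cervical",
    "T": "thoracic",
    "L": "lumbar",
    "S": "sacral",
    # the coccygeal region is omitted: the text gives no lettered example for it
}

DERMATOME_EXAMPLES = {   # segment -> body surface it serves (from Figure 2-28B)
    "C5": "base of the neck",
    "L2": "lower back",
}

def describe_segment(segment: str) -> str:
    """Describe the dermatome served by a spinal cord segment."""
    region = SPINAL_REGIONS.get(segment[0], "unknown region")
    area = DERMATOME_EXAMPLES.get(segment, "see the dermatome map")
    return (f"{segment} ({region} segment): sensory nerve from and "
            f"motor nerve to the {area}")

print(describe_segment("C5"))
print(describe_segment("L2"))
```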
These sensory and motor nerves, known as spinal (or peripheral )
nerves, are functionally equivalent to the cranial nerves of the head.
Whereas the cranial nerves receive information from sensory receptors in
the eyes, ears, facial skin, and so forth, the spinal nerves receive
information from sensory receptors in the rest of the body—that is, in the
PNS. Similarly, whereas the cranial nerves move the muscles of the eyes,
tongue, and face, the peripheral nerves move the muscles of the limbs and
trunk.
FIGURE 2-28 Spinal Segments and Dermatomes (A) Medial
view showing the five spinal cord regions: cervical (C), thoracic (T),
lumbar (L), sacral (S), and coccygeal. (B) Each segment corresponds to a
region of body surface (a dermatome) identified by the segment number
(e.g., C5 at the base of the neck and L2 in the lower back).
Function
The whole nervous system, the functioning brain, and individual cells
work in concert to produce behavior. Knowing the
parts of the nervous system and some general notions about what they do
is only the beginning. Learning how the parts work together allows us to
proceed to a closer look, in the chapters that follow, at how the brain
produces behavior.
Thus far, we have identified 10 principles related to the nervous
system’s functioning. Here we elaborate each one. As you progress
through the book, review these ideas regularly with an eye toward
understanding the concept rather than simply memorizing the principle.
Soon you will find yourself applying the principles of function as you
encounter new information about the brain and behavior.
Principle 8: The brain divides sensory input for object recognition and motor control.
A Genetic Diagnosis
Fraternal twins Alexis and Noah Beery seemingly acquired cerebral
palsy perinatally (at or near birth). They had poor muscle tone and
could barely walk or sit. Noah drooled and vomited, and Alexis had
tremors.
Typically, children with cerebral palsy, a condition featuring
perinatal brainstem damage, do not get worse with age, but the
twins’ condition deteriorated. Their mother, Retta Beery, observed as
well that Alexis’s symptoms fluctuated: they improved after she slept
or napped, for example.
Scientists continue to fix, slice, and stain brain tissue and to improve
on ways of visualizing cells to provide insights into what cells do.
Visualization is aided by dyes that color an individual cell completely;
color some cellular components, such as its proteins; or, as described in
Research Focus 3-2, Brainbow: Rainbow Neurons, color the cell only when
it is engaged in a particular activity.
Scientists have developed other techniques of viewing living cells in
the nervous system or viewing cells that are cultured in a dish with
nurturing fluids. In doing so, they use staining techniques to produce an
image of the cell and to allow its activity to be viewed and controlled.
Investigators even implant tiny microscopes in the brain to view the
activity of its neurons (Chen et al., 2013). There remains, however, the
problem of making sense of what you see. Different brain samples can
yield different images, and different people can interpret the images in
different ways.
So began a controversy between two great scientists—the Italian
Camillo Golgi and the Spaniard Santiago Ramón y Cajal—that resulted in
defining neurons. Both men were awarded the Nobel Prize for medicine
in 1906. Imagine that you are Camillo Golgi, hard at work in your
laboratory staining and examining nervous system cells. You immerse a
thin slice of brain tissue in a solution containing silver nitrate and other
chemicals, a technique used at the time to produce black-and-white
photographs. A contemporary method, shown in Figure 3-1 A , produces
a color-enhanced microscopic image that resembles the images Golgi
saw.
FIGURE 3-1 Two Views of a Cell (A) Tissue preparation revealing human
pyramidal cells stained using the Golgi technique. (B) Cajal's drawing of a single Purkinje
neuron made from Golgi-stained tissue. Drawing from Histologie du système nerveux de l'homme et
des vertébrés, by S. Ramón y Cajal, 1909–1911, Paris: Maloine.
Brain Tumors
One day while watching a movie in a neuropsychology class, R. J., a
19-year-old college sophomore, collapsed on the floor, displaying
symptoms of a seizure. The instructor helped her to the university
clinic, where she recovered except for a severe headache. She
reported that she had repeated severe headaches.
A few days later, a computed tomography (CT) scan showed a
tumor over her left frontal lobe. The tumor was removed surgically,
and R. J. returned to classes after an uneventful recovery. Her
symptoms have not recurred.
A tumor is an uncontrolled growth of new tissue that is
independent of surrounding structures. No region of the body is
immune, but the brain is a site for more than 120 kinds of
tumors. They are a common form of cancer in children.
tumor Mass of new tissue that grows uncontrolled and
independent of surrounding structures.
The incidence of brain tumors in the United States is about 20 per
100,000, according to the Central Brain Tumor Registry of the
United States (Quinn et al., 2014). In adults, brain tumors grow from
glia or other supporting cells rather than from neurons, but in infants,
tumors may grow from developing neurons. Rate of tumor growth
depends on the type of cell affected.
Some tumors are benign, as R. J.’s was, and not likely to recur
after removal. Others are malignant, likely to progress, to invade
other tissue, and apt to recur after removal. Both kinds can pose a
risk to life if they develop in sites from which they are difficult to
remove.
The earliest symptoms usually result from increased pressure on
surrounding brain structures. They can include headaches, vomiting,
mental dullness, changes in sensory and motor abilities, and seizures
such as R. J. had. Many symptoms depend on the tumor’s location.
The three major types of brain tumors are classified according to
how they originate:
1. Gliomas arise from glial cells. They are slow growing, not often
malignant, and relatively easy to treat if they arise from astrocytes.
Gliomas that arise from the precursor blast or germinal cells that
grow into glia are more often malignant, grow more quickly, and
often recur after treatment. U.S. Senator Edward Kennedy was
diagnosed with a malignant glioma in 2008 and died a year later.
As with R. J., his first symptom was an epileptic seizure.
2. Meningiomas, such as R. J.’s, attach to the meninges and so grow
entirely outside the brain, as shown in the accompanying CT scan.
These tumors are usually encapsulated (contained), and if the
tumor is accessible to surgery, chances of recovery are good.
3. Metastatic tumors become established when cells from one region
of the body transfer to another area (which is what metastasis
means). Typically, metastatic tumors are present in multiple
locations, making treatment difficult. Symptoms of the underlying
condition often first appear when the tumor cells reach the brain.
Astroglia
Astrocytes (star-shaped glia, shown in Table 3-1 ), also called astroglia,
provide structural support to the CNS. Their extensions attach to blood
vessels and to the brain’s lining, forming a scaffolding that holds neurons
in place. These same extensions provide pathways for certain nutrients to
move between blood vessels and neurons. Astrocytes also secrete
chemicals that keep neurons healthy and help them heal if injured.
astrocyte Star-shaped glial cell that provides structural support to
CNS neurons and transports substances between neurons and blood
vessels.
At the same time, astrocytes contribute to the structure of a protective
partition between blood vessels and the brain, the blood–brain barrier.
As shown in Figure 3-7 , the end feet of astrocytes attach to blood vessel
cells, causing the vessels to bind tightly together. These tight junctions
prevent an array of substances, including many toxins, from entering the
brain through the blood vessel walls.
blood–brain barrier Tight junctions between the cells that
compose blood vessels in the brain, providing a barrier to the entry
of an array of substances, including toxins, into the brain.
FIGURE 3-8 Detecting Brain Damage (A) Arrows indicate the red nucleus in a
rat brain. (B) Close-up of cresyl violet–stained neurons, the large dark bodies, in the
healthy red nucleus. (C) After exposure to a neurotoxin, only microglia, the small dark
objects in the micrograph, survive.
2 Schwann cells first shrink and then divide, forming glial cells along the
axon’s former path.
3 The neuron sends out axon sprouts, one of which finds the Schwann-
cell path and becomes a new axon.
4 Schwann cells envelop the new axon, forming new myelin.
FIGURE 3-9 Neuron Repair Schwann cells aid the regrowth of axons
in the somatic nervous system.
Sections 11-1 and 11-4 detail causes of and treatments for spinal
cord injury.
When the CNS is damaged, as happens, for example, when the spinal
cord is cut, regrowth and repair do not occur, even though the distance
that damaged fibers must bridge is short. The oligodendrocytes that
myelinate CNS cells do not behave like PNS Schwann cells to encourage
brain repair. They may actually play a role in inhibiting neuron regrowth
(Rusielewicz et al., 2014). That recovery should take place in the PNS
but not in the CNS is puzzling. Regrowth in the CNS may not occur in
part because as neuronal circuits mature, they become exquisitely tuned
to mediate individualized behavior and so are protected from the
proliferation of new cells or the regrowth of existing cells.
3-1 REVIEW
Cells of the Nervous System
Before you continue, check your understanding.
1 . The two classes of nervous system cells are ___________, which in
humans number around ___________, and ___________, which in
humans number about ___________, reflecting the typical ratio.
2 . Neurons, the information-conducting units of the nervous system, act
either by ___________ or by ___________ one another through their
connecting synapses.
3 . The three types of neurons and their characteristic functions are
___________, which ___________; ___________, which
___________; and ___________, which ___________.
4 . The five types of glial cells are ___________, ___________,
___________, ___________, and ___________. Their functions
include ___________, ___________, ___________, ___________,
and ___________ neurons.
5 . What is the main obstacle to producing a robot with all of the
behavioral abilities displayed by a mammal?
Answers appear at the back of the book.
FIGURE 3-10 Typical Nerve Cell This view of the outside and inside
of a neuron reveals its overall structure and internal organelles and other
components.
THE BASICS
Chemistry Review
The smallest unit of a protein or any other chemical substance is the
molecule. Molecules and the even smaller atoms of elements that
constitute them are the cellular factory’s raw materials.
Elements, Atoms, and Ions
Chemists represent each element, a substance that cannot be broken
down into another substance, by a symbol—for example, O for
oxygen, C for carbon, and H for hydrogen. The 10 elements listed
below in the table Chemical Composition of the Brain constitute
virtually the entire makeup of an average living cell. Many other
elements are vital to the cell but present only in minute quantities.
The smallest quantity of an element that retains the properties of
that element is an atom. Ordinarily, as shown opposite in part A of
the figure Ion Formation, atoms are electrically neutral: their total
positive and negative charges are equal.
Atoms of chemically reactive elements such as sodium and
chlorine can easily lose or gain negatively charged particles, or
electrons. When an atom gives up electrons, it becomes positively
charged; when it takes on extra electrons, it becomes negatively
charged, as illustrated in part B of Ion Formation. Either way, the
charged atom is now an ion. Ions’ positive or negative charges
allow them to interact. This property is central to cell function.
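A quick way to see how ion formation works is to compute an atom's net charge directly. The sketch below uses sodium and chlorine, the two reactive elements named above; their proton and electron counts are standard chemistry values rather than entries from the accompanying table.

```python
# A small illustration of ion formation: an atom's net charge is the
# difference between its protons (+) and its electrons (–).

def net_charge(protons: int, electrons: int) -> int:
    """Net electrical charge of an atom or ion."""
    return protons - electrons

# A neutral sodium atom (11 protons, 11 electrons) that gives up one
# electron becomes the positively charged ion Na+.
print(net_charge(protons=11, electrons=11))   # 0  -> neutral atom
print(net_charge(protons=11, electrons=10))   # +1 -> Na+ ion

# A neutral chlorine atom (17 protons) that gains an extra electron
# becomes the negatively charged ion Cl-.
print(net_charge(protons=17, electrons=18))   # -1 -> Cl- ion
```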
Chemical Composition of the Brain
Ions Critical to Neuronal Communication
Ion Formation (A) Atoms: Total positive (+) and negative (–) charges in
atoms are equal. The nucleus contains neutrons (no charge) and protons
(positive charge). Orbiting the nucleus are electrons (negative charge).
This static picture of chromosomes does not represent the way they
look in living cells. Videos of the cell nucleus show that chromosomes
are constantly changing shape and moving in relation to one another,
jockeying to occupy the best locations within the nucleus. By changing
shape, chromosomes expose different genes to the surrounding fluid,
thus allowing the gene to begin the process of making a protein.
Chromosome means colored body ; they are so named because
chromosomes can be readily stained with certain dyes.
A human somatic (body) cell has 23 pairs of chromosomes, or 46
chromosomes in all (in contrast, the 23 chromosomes within a
reproductive cell are not paired). Each chromosome is a double-stranded
molecule of deoxyribonucleic acid (DNA). The two strands of a DNA
molecule coil around each other, as shown in Figure 3-12 .
Each strand possesses a variable sequence of four nucleotide bases,
the constituent molecules of the genetic code: adenine (A), thymine (T),
guanine (G), and cytosine (C). Adenine on one strand always pairs with
thymine on the other, whereas guanine on one strand always pairs with
cytosine on the other. The two strands of the DNA helix are bound
together by the attraction between the two bases in each pair, as
illustrated in Figure 3-12 . Sequences of hundreds of nucleotide bases
within the chromosomes spell out the genetic code. Scientists represent
this code by the letters of the nucleotide bases, for example ATGCCG
and so forth.
A gene is a segment of a DNA strand. A gene’s code is its sequence
of thousands of nucleotide bases. Much as a sequence of letters spells out
a word, the sequence of ACTG base pairs spells out the order in which
amino acids, the constituent molecules of proteins, should be assembled
to construct a certain protein. To begin to make a protein, the appropriate
gene segment of the DNA strands first unwinds to expose its bases. The
exposed sequence of nucleotide bases on one of the DNA strands then
serves as a template to attract free-floating molecules called nucleotides.
The nucleotides, once attached, form a complementary strand of
ribonucleic acid (RNA), the single-stranded nucleic acid molecule
required for protein synthesis. This process, called transcription, is
shown in steps 1 and 2 of Figure 3-13 . (To transcribe means to copy, as
in copying part of a message you receive in a text.)
FIGURE 3-13 Protein Synthesis Information in a cell flows from DNA
to mRNA to protein (peptide chain).
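Transcription amounts to applying the base-pairing rules to a template strand. The sketch below does exactly that in a few lines, treating the ATGCCG example from the text as a hypothetical template and substituting uracil (U) for thymine in the resulting RNA.

```python
# A minimal sketch of the base-pairing rules applied to transcription:
# each base on the template DNA strand attracts its complement, with
# uracil (U) standing in for thymine in RNA.

DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand: str) -> str:
    """Build the complementary mRNA strand from a DNA template."""
    return "".join(DNA_TO_RNA[base] for base in template_strand)

print(transcribe("ATGCCG"))   # -> UACGGC
```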
A protein’s shape and its ability to change shape and to combine with
other proteins are central to its function. Through their shapes and
changes in shape, proteins can combine with other proteins in chemical
reactions. They can modify the length and shape of other proteins and so
act as enzymes. Proteins embedded in a cell membrane form
passageways called channels, gates, and pumps that regulate the flow of
substances through the membrane. And proteins can be exported through
the axon terminal to travel to other cells and so act as messenger
molecules.
FIGURE 3-19 Transmembrane Proteins Channels, gates, and
pumps are proteins embedded in the cell membrane.
3-2 REVIEW
Internal Structure of a Cell
Before you continue, check your understanding.
1 . The constituent parts of the cell include the ___________,
___________, ___________, ___________, ___________, and
___________.
2 . The product of the cell is ___________. They serve many functions,
including acting at the cell membrane as ___________, ___________,
and ___________ to regulate movement of substances across the
membrane.
3 . The basic sequence of events in building a protein is that
___________ makes ___________ makes ___________.
4 . Once proteins are formed in the ___________, they are wrapped in
membranes by ___________ and transported by ___________ to their
designated sites in the neuron or its membrane or exported from the
cell by ___________.
5 . Why is a cell more than a protein factory?
Answers appear at the back of the book.
In this micrograph a sickle cell is surrounded by healthy blood cells.
Huntington Disease
Woody Guthrie, whose protest songs made him a spokesman for farm
workers during the Great Depression of the 1930s, is revered among
the founders of American folk music. His best-known song is “This
Land Is Your Land.” Singer and songwriter Bob Dylan was
instrumental in reviving Guthrie’s popularity in the 1960s.
Guthrie died in 1967 after struggling with what was eventually
diagnosed as Huntington disease. His mother had died of a similar
condition, although her illness was never diagnosed. Two of Guthrie’s
five children from two marriages developed the disease, and his
second wife, Marjorie, became active in promoting its study.
Huntington disease is devastating, characterized by memory
impairment; choreas (abnormal, uncontrollable movements); and
marked changes in personality, eventually leading to nearly total loss
of healthy behavioral, emotional, and intellectual functioning. Even
before the onset of motor symptoms, Huntington disease impairs
theory of mind, a person’s ability to assess the behavior of others
(Eddy and Rickards, 2015).
The symptoms of Huntington disease result from neuronal
degeneration in the basal ganglia and cortex. Symptoms can appear at
any age but typically start in midlife. In 1983, the HTT (huntingtin)
gene responsible for forming the abnormal huntingtin protein was
found on chromosome 4.
The HTT gene has been a source of insights into the transmission
of genetic disorders. Part of the gene contains repeats of the base
sequence CAG. The CAG codon encodes the amino acid glutamine.
If the number of CAG repeats exceeds about 40, then the carrier, with
40 or more glutamine amino acids in the huntingtin protein, has an
increased likelihood of Huntington symptoms.
As the number of CAG repeats increases, the onset of symptoms
occurs earlier in life, and the disease progresses more rapidly.
Typically, non-Europeans have fewer repeats than do Europeans,
among whom the disease is more common. The number of repeats
can also increase with transmission from the father but not from the
mother.
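The repeat-length logic described here is easy to make concrete: read a stretch of the gene, count consecutive CAG codons, and compare the count with the roughly 40-repeat threshold. The sequences in the sketch below are invented fragments for illustration, not real HTT sequence.

```python
# Count consecutive CAG codons in a (made-up) gene fragment and compare
# the count with the ~40-repeat threshold described in the text.
import re

def longest_cag_run(sequence: str) -> int:
    """Length (in repeats) of the longest uninterrupted run of CAG codons."""
    runs = re.findall(r"(?:CAG)+", sequence)
    return max((len(run) // 3 for run in runs), default=0)

def huntington_risk(sequence: str, threshold: int = 40) -> str:
    repeats = longest_cag_run(sequence)
    # Each CAG codon adds one glutamine to the huntingtin protein.
    status = ("expanded (increased likelihood of symptoms)"
              if repeats >= threshold else "within the typical range")
    return f"{repeats} CAG repeats -> {status}"

print(huntington_risk("ATG" + "CAG" * 21 + "CCT"))   # typical-range example
print(huntington_risk("ATG" + "CAG" * 44 + "CCT"))   # expanded example
```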
Investigations into why brain cells change in Huntington disease
and into potential treatments use transgenic animal models. Mice,
rats, and monkeys that have received the HTT gene feature the
abnormal huntingtin protein and display symptoms of Huntington
disease (Gu et al., 2015).
Woody Guthrie, whose unpublished lyrics and artwork are
archived at woodyguthrie.org.
Chromosome Abnormalities
Genetic disorders are not caused solely by single defective alleles. Some
nervous system disorders are caused by copy number variations, that is,
aberrations in a part of a chromosome or even an entire chromosome.
Copy number variations are related to a variety of disorders, including
autism, schizophrenia, and learning disabilities. Often though, copy
number variation has little obvious consequence or is beneficial. For
example, humans average about 6 copies of the AMY1 (amylase) gene but
may have as many as 15 copies. The increased copy number is an adaptation
that improves the ability to digest starchy foods (Mimori et al., 2015).
One condition due to a change in chromosome number in humans is
Down syndrome, which affects approximately 1 in 700 children. Down
syndrome is usually the result of an extra copy of chromosome 21. One
parent (usually the mother) passes on two copies of chromosome 21 to the
child rather than the normal single chromosome. Combining these two
with one chromosome from the other parent yields three chromosomes
21, an abnormal number called a trisomy ( Figure 3-22 ).
Down syndrome Chromosomal abnormality resulting in intellectual
impairment and other abnormalities, usually caused by an extra
chromosome 21.
Although chromosome 21 is the smallest human chromosome, its
trisomy can dramatically alter a person’s phenotype. People with Down
syndrome have characteristic facial features and short stature. They are
susceptible to heart defects, respiratory infections, and intellectual
impairment. They are prone to developing leukemia and Alzheimer
disease. Although people with Down syndrome usually have a much
shorter than normal life span, some live to middle age or beyond.
Improved educational opportunities for children with Down syndrome
show that they can learn to compensate greatly for their mental
disabilities.
FIGURE 3-22 Chromosome Aberration Left: Down syndrome, also known as
trisomy 21, is caused by an extra chromosome 21 (colored red, bottom row at left). Right:
Chris Burke, the first person with Down syndrome to play a leading role on a television
series, Life Goes On in the 1990s, is now in his fifties. He performs as a lead singer in a
band.
Genetic Engineering
Despite advances in understanding gene structure and function, the gap in
understanding how genes produce behavior remains wide. To investigate
gene structure and behavior relations, geneticists have invented methods
to influence the traits genes express. This approach collectively defines
the science of genetic engineering. In its simplest forms, genetic
engineering entails manipulating a genome, removing a gene from a
genome, or modifying or adding a gene to the genome. Its techniques
include selective breeding, cloning, and transgenics.
Selective Breeding
The oldest means of influencing genetic traits is the selective breeding of
animals and plants. Beginning with the domestication of wolves into dogs
more than 30,000 years ago, humans have domesticated many animal
species by selectively breeding males and females that display particular
traits. The selective breeding of dogs, for example, has produced the
species with the most diverse traits of all animal species: breeds that can
run fast, haul heavy loads, retrieve prey, dig for burrowing animals, climb
rocky cliffs in search of sea birds, herd sheep and cattle, or sit on an
owner’s lap and cuddle. Selective breeding especially influences dogs’
sociability with humans (Persson et al., 2015).
Maintaining spontaneous mutations is one objective of selective
breeding. Using this method, researchers produce whole populations of
animals possessing some unusual trait that originally arose as an
unexpected mutation in only one individual or in a few animals. In
laboratory colonies of mice, for example, multiple spontaneous mutations
have been discovered and maintained to produce over 450 different
mouse strains.
Some strains of mice make abnormal movements, such as reeling,
staggering, and jumping. Other strains have diseases of the immune
system; others are blind or cannot hear. Some mice are smart; some mice
are not; some have big brains, some small; and many display distinctive
behavioral traits. Many such genetic variations can also be found in
humans. As a result, the neural and genetic bases of the altered behavior
in the mice can be studied systematically to understand and treat human
disorders.
Unlike other animals, humans can consent to experimental
procedures. Section 7-7 frames debates on the benefits and ethics of
conducting research using nonhuman animals.
Cloning
More direct approaches to manipulating the expression of genetic traits
include altering early embryonic development. One such method is
cloning —producing an offspring that is genetically identical to another
animal.
Sections 7-1 and 7-5 review genetic methods used in neuroscience
research.
To clone an animal, scientists begin with a cell nucleus containing
DNA, usually from a living animal donor, place it in an egg cell from
which the nucleus has been removed, and after stimulating the egg to start
dividing, implant the new embryo in the uterus of a female. Because each
individual animal that develops from these cells is genetically identical to
the donor, clones can be used to preserve valuable traits, to study the
relative influences of heredity and environment, or to produce new tissue
or organs for transplant to the donor. Dolly, a female sheep, was the first
cloned mammal.
A team of researchers in Scotland cloned Dolly in 1996. As an adult,
she mated and bore a lamb.
Cloning has matured from an experimental manipulation to a
commercial enterprise. The first horse to be cloned was Charmayne
James’s horse Scamper, the mount she rode to 11 world championships in
barrel racing. The first cat to be cloned, shown in Figure 3-23 , was
called Copycat. The first rare species cloned was an Asian gaur, an animal
related to the cow. Investigators anticipate that cloning will be used to
reanimate extinct animals, a process called de-extinction. They propose
using preserved cells from the extinct passenger pigeon or from frozen
carcasses of the mastodon, an extinct elephant species (an enclosure to
house a de-extinct mastodon has been prepared in Russia), to clone those
animals.
FIGURE 3-23 A Clone and Her Mom Copycat (left) and Rainbow (right), the cat
that donated the cell nucleus for cloning. Although the cats’ genomes are identical, their
phenotypes, including fur color, differ. One copy of the X chromosome is randomly
inactivated in each cell, which explains the color differences. Even clones are subject to
phenotypic plasticity: they retain the capacity to develop into more than one phenotype.
Transgenic Techniques
Transgenic technology enables scientists to introduce genes into an
embryo or remove genes from it. For example, introducing a new gene
can enable cows or goats to produce medicines in their milk. The
medicines can be extracted from the milk to treat human disease.
Transgenic techniques have been used to take a mouse gene that affords
resistance to tuberculosis and insert it into cows, increasing their
resistance to TB (Wu et al., 2015).
Chimeric animals are composites formed when an embryo of one
species receives cells from a different species. The resulting animal has
cells with genes from both parent species and behaviors that are a product
of those gene combinations. The chimeric animal may display an
interesting mix of the parent species’ behaviors. For example, chickens
that received Japanese quail cells in early embryogenesis display some
aspects of quail crowing behavior rather than chicken crowing behavior—
evidence for the genetic basis of the bird’s vocalization (Balaban, 2005).
The chimeric preparation provides an investigative tool for studying the
neural basis of crowing, because quail neurons can be distinguished from
chicken neurons when examined under a microscope.
Chimerism is common in humans (Giorgi, 2015). Twin zygotes
(fertilized eggs) may fuse into a single individual, twins may exchange
cells through placental circulation, and the fetus and the mother may
exchange cells with one another. Even organ transplant or stem cell
recipients may incorporate transplanted cells into other organs.
In knock-in technology, a number of genes or a single gene from one
species is added to the genome of another species, passed along, and
expressed in subsequent generations of transgenic animals. Brainbow
technology, described in Research Focus 3-2 , applies the knock-in
technique. Another application is in the study and treatment of human
genetic disorders. For instance, researchers have introduced into a line of
mice and a line of Rhesus monkeys the human HTT gene that causes
Huntington disease (Gill and Rego, 2009; see Focus 3-4). The mice
express the abnormal allele and display humanlike Huntington symptoms.
This mouse line is being used to study possible therapies for Huntington
disease in humans.
transgenic animal Product of technology in which one or more
genes from one species is introduced into the genome of another
species to be passed along and expressed in subsequent generations.
Knockout technology is used to inactivate a gene, for example so that a
line of mice fails to express it. The mouse line can then be examined to
determine whether the targeted gene is responsible for a specific function
or a human disorder and to examine possible therapies. It may be possible
to knock out genes related to certain kinds of memory, such as emotional
memory, social memory, or spatial memory. Knockout technology is a
useful way of investigating the neural basis of memory as well as clinical
conditions associated with learning impairments (Kusakari et al., 2015).
The neural basis of memory is the topic of Section 14-3 .
CLINICAL FOCUS 4-1
Epilepsy
J. D. worked as a disc jockey for a radio station and at parties in his
off-hours. One evening, he set up on the back of a truck at a rugby
field to emcee a jovial and raucous rugby party. Between musical
sets, he made introductions, told jokes, and exchanged toasts.
About one o’clock in the morning, J. D. suddenly collapsed,
making unusual jerky motions, then passed out. He was rushed to a
hospital emergency room, where he gradually recovered. The
attending physician noted that he was not intoxicated, released him
to his friends, and recommended a series of neurological tests for the
next day. Neuroimaging with state-of-the-art brain scans can usually
reveal brain abnormalities (Bano et al., 2011), but it did not do so in
J. D.’s case.
The EEG detects electrical signals given off by the brain in
various states of consciousness, as explained in Sections 7-2
and 13-3 . Section 16-3 details the diagnosis and treatment of
epilepsy.
Nervous System
The first hints about how the nervous system conveys its messages came
in the eighteenth century, following the discovery of electricity. Early
discoveries about the nature of electricity quickly led to proposals that it
plays a role in conducting information in the nervous system. We
describe a few milestones that led from this idea to an understanding of
how the nervous system really conveys information. If you have a basic
understanding of how electricity works and how it is used to stimulate
neural tissue, read on. If you prefer to brush up on electricity and
electrical stimulation first, turn to The Basics: Electricity and Electrical
Stimulation on page 110 .
As you might imagine, Bartholow’s report was not well received! The
uproar after its publication forced him to leave Cincinnati. Despite the
questionable ethics of his experiment, Bartholow had demonstrated that
the brain of a conscious person could be stimulated electrically to produce
movement of the body.
By the 1960s, the scientific community had established ethical
standards for research on human and nonhuman subjects (see Section
7-7 ). Today, brain stimulation is standard in many neurosurgical
procedures (see Section 16-2 ).
Electrical Recording Studies
A less invasive line of evidence that information flow in the brain is
partly electrical came from the results of electrical recording experiments.
Richard Caton, a British physician who lived more than a century ago, was the first
to measure the brain’s electrical currents with a sensitive voltmeter, a
device that measures the flow and the strength of electrical voltage by
recording the difference in electrical potential between two bodies. When
he placed electrodes on a human subject’s skull, Caton reported
fluctuations in his voltmeter recordings. Today, this type of brain
recording, the electroencephalogram (EEG), is a standard tool used,
among other things, to monitor sleep stages and to detect the excessive
neural synchrony that characterizes electrographic seizures, as described
in Clinical Focus 4-1 , Epilepsy.
voltmeter Device that measures the flow and the strength of
electrical voltage by recording the difference in electrical potential
between two bodies.
electroencephalogram (EEG) Graph that records electrical activity
from the brain and mainly indicates graded potentials of many
neurons.
Detail on these EEG applications appears in Sections 7-2 , 13-3 , and
16-3 .
These pioneering studies provided evidence that neurons send
electrical messages, but concluding that nerves and tracts carry the kind
of electrical current that powers your phone would be incorrect. Hermann
von Helmholtz, a nineteenth-century German scientist, stimulated a nerve
leading to a muscle and measured the time the muscle took to contract.
The nerve conducted information at only 30 to 40 meters per second,
whereas electricity flows along a wire about a million times faster.
Information flow in the nervous system, then, is much too slow to be a
flow of electricity (based on electrons). To explain the electrical signals of
a neuron, Julius Bernstein suggested in 1886 that neuronal chemistry
(based on ions) produces an electrical charge. He also proposed that the
charge can change and so act as a signal. Bernstein’s idea was that
successive waves of electrical change constitute the message conveyed by
the neuron.
Moreover, it is not the ions themselves that travel along the axon but
rather a wave of charge. To understand the difference, consider other
kinds of waves. If you drop a stone into a pool of still water, the contact
produces a wave that travels away from the site of impact, as shown in
Figure 4-2 . The water itself does not travel. Only the change in pressure
moves, shifting the height of the water surface and producing the wave
effect.
Similarly, when you speak, you induce pressure waves in air, and these
waves carry the sound of your voice to a listener. If you flick a towel, a
wave travels to the other end of the towel. Just as waves through the air
send a spoken message, Bernstein’s idea was that waves of chemical
change travel along an axon to deliver a neuron’s message.
Oscilloscope
Hodgkin and Huxley’s experiments were made possible by the invention
of the oscilloscope, a voltmeter with a screen sensitive enough to display
the minuscule electrical signals emanating from a nerve or neuron over
time (Figure 4-5 A ). As graphed in Figure 4-5B , the units used when
recording the electrical charge from a nerve or neuron are millivolts (mV;
1 mV is one-thousandth of a volt) and milliseconds (ms; 1 ms is one-
thousandth of a second).
oscilloscope Device that serves as a sensitive voltmeter by
registering changes in voltage over time.
Microelectrodes
The final device needed to measure a neuron’s electrical activity is an
electrode small enough to place on or in an axon—a microelectrode.
Microelectrodes can deliver an electrical current to a single neuron or
record from it. One way to make a microelectrode is to etch the tip of a
piece of thin wire to a fine point about 1 µm in size and insulate the rest
of the wire. The tip is placed on or in the neuron, as shown in the image at
left in Figure 4-6 A .
microelectrode A microscopic insulated wire or a saltwater-filled
glass tube whose uninsulated tip is used to stimulate or record from
neurons.
Microelectrodes can also be made from a thin glass tube tapered to a
very fine tip (Figure 4-6 A, right image). The tip of a hollow glass
microelectrode can be as small as 1 µm. When the glass tube is filled with
salty water, a conducting medium (through which an electrical current can
travel), it acts as an electrode. A wire in the salt solution connects the
electrode to either a stimulating or a recording device.
Microelectrodes can record from axons in many ways. The tip of a
microelectrode placed on an axon provides an extracellular measure of
the electrical current from a tiny part of the axon. The tip of one electrode
can be placed on the surface of the axon and the tip of a second electrode
can be inserted into the axon. This technique measures voltage across the
cell membrane.
A still more refined use of a glass microelectrode is to place its tip on
the neuron’s membrane and apply a little suction until the tip is sealed to a
patch of the membrane, as shown in Figure 4-6B . This technique,
analogous to placing the end of a soda straw against a piece of plastic
wrapping and sucking, allows a recording to be made from only the small
patch of membrane sealed to the microelectrode tip.
Using the giant axon of the squid, an oscilloscope, and
microelectrodes, Hodgkin and Huxley recorded the electrical voltage on
an axon’s membrane and explained the nerve impulse as changes in ion
concentration across the cell membrane. The basis of electrical activity in
nerves is the movement of intracellular and extracellular ions, which
carry positive and negative charges across the cell membrane. To
understand Hodgkin and Huxley’s results, you first need to understand the
principles underlying the movement of ions.
Resting Potential
Figure 4-9 shows how the voltage difference is recorded when one
microelectrode is placed on the outer surface of an axon’s membrane and
another is placed on its inner surface. In the absence of stimulation, the
difference is about 70 mV. Although the charge on the outside of the
membrane is actually positive, by convention it is given a charge of zero.
Therefore, the inside of the membrane at rest is –70 mV relative to the
extracellular side.
If we were to continue to record for a long time, the charge across the
unstimulated membrane would remain much the same. The charge can
change, given certain changes in the membrane, but at rest the difference
in charge on the inside and outside of the membrane produces an
electrical potential —the ability to use its stored power, analogous to a
charged battery. The charge is thus a store of potential energy called the
membrane’s resting potential.
resting potential Electrical charge across the insulating cell
membrane in the absence of stimulation; a store of potential energy
produced by a greater negative charge on the intracellular side
relative to the extracellular side.
FIGURE 4-9 Resting Potential The electrical charge across a resting
cell membrane stores potential energy.
We might use the term potential in the same way to talk about the
financial potential of someone who has money in the bank—the person
can spend the money at some future time. The resting potential, then, is a
store of energy that can be used later. Most of your body’s cells have a
resting potential, but it is not identical on every axon. Resting potentials
vary from –40 to –90 mV, depending on neuronal type and animal
species.
Four charged particles take part in producing the resting potential:
ions of sodium (Na+), potassium (K+), chloride (Cl–), and large protein
molecules (A–). Sodium and potassium ions are cations; chloride ions and
protein molecules are anions (both terms are defined in Section 4-1). As
Figure 4-10 shows, these charged particles are
distributed unequally across the axon’s membrane, with more protein
anions and potassium ions in the intracellular fluid and more chloride
and sodium ions in the extracellular fluid. How do the unequal
concentrations arise, and how does each contribute to the resting
potential?
Graded Potentials
The resting potential provides an energy store that can be used somewhat
like the water in a dam: small amounts can be released by opening gates
for irrigation or to generate electricity. If the concentration of any of the
ions across the unstimulated cell membrane changes, the membrane
voltage changes. These graded potentials are small voltage fluctuations
across the cell membrane.
graded potential Small voltage fluctuation across the cell
membrane.
Stimulating a membrane electrically through a microelectrode mimics
the way the membrane’s voltage changes to produce a graded potential in
the living cell. If the voltage applied to the inside of the membrane is
negative, the membrane potential increases in negative charge by a few
millivolts. As illustrated in Figure 4-13A, it may change from a resting
potential of –70 mV to a slightly more negative potential of –73 mV.
This change is a hyperpolarization because the charge (polarity) of
the membrane increases. Conversely, if positive voltage is applied inside
the membrane, its potential decreases by a few millivolts. As illustrated
in Figure 4-13B, it may change from, say, a resting potential of –70 mV
to a slightly less negative potential of –65 mV. This change is a depolarization
because the membrane charge decreases. Graded potentials usually last
only milliseconds.
hyperpolarization Increase in electrical charge across a membrane,
usually due to the inward flow of chloride ions or the outward flow of
potassium ions.
depolarization Decrease in electrical charge across a membrane,
usually due to the inward flow of sodium ions.
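To make the sign convention concrete, here is a minimal Python sketch (our illustration, not part of the text; the function name and values are invented for the example) that classifies a change in membrane voltage relative to the –70 mV resting potential used in Figure 4-13.

RESTING_POTENTIAL_MV = -70.0  # typical resting potential used in this chapter

def classify_graded_potential(new_potential_mv, rest_mv=RESTING_POTENTIAL_MV):
    """Label a graded potential relative to the resting level."""
    if new_potential_mv < rest_mv:
        return "hyperpolarization"   # membrane becomes more negative than rest
    if new_potential_mv > rest_mv:
        return "depolarization"      # membrane becomes less negative than rest
    return "no change"

print(classify_graded_potential(-73.0))  # hyperpolarization (Figure 4-13A example)
print(classify_graded_potential(-65.0))  # depolarization (Figure 4-13B example)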
Hyperpolarization and depolarization typically take place on the soma
(cell body) membrane and on neuronal dendrites. These areas contain
gated channels that can open and close, changing the membrane potential
as illustrated in Figure 4-13 . Three channels—for potassium, chloride,
and sodium ions—underlie graded potentials.
FIGURE 4-13 Graded Potentials (A) Stimulation (S) that increases
relative membrane voltage produces a hyperpolarizing graded potential.
(B) Stimulation that decreases relative membrane voltage produces a
depolarizing graded potential.
Action Potential
Electrical stimulation of the cell membrane at resting potential produces
local graded potentials. An action potential is a brief but large reversal
in an axon membrane’s polarity (Figure 4-14 A ). It lasts about 1 ms.
The voltage across the membrane suddenly reverses, making the
intracellular side positive relative to the extracellular side, then abruptly
reverses again to restore the resting potential. Because the action
potential is brief, many action potentials can occur within a second, as
illustrated in Figure 4-14B and C, where the time scales are compressed.
action potential Large, brief reversal in the polarity of an axon
membrane.
An action potential occurs when a large concentration of first Na+ and
then K+ crosses the membrane rapidly. The depolarizing phase of the
action potential is due to Na+ influx, and the hyperpolarizing phase, to
K+ efflux. Sodium rushes in, then potassium rushes out. As shown in
Figure 4-15 , the combined flow of sodium and potassium ions underlies
the action potential.
An action potential is triggered when the cell membrane is
depolarized to about –50 mV. At this threshold potential, the membrane
charge undergoes a remarkable further change with no additional
stimulation. The relative voltage of the membrane drops to zero and
continues to depolarize until the charge on the inside of the membrane is
as great as +30 mV—a total voltage change of 100 mV. Then the
membrane potential reverses again, becoming slightly hyperpolarized—a
reversal of a little more than 100 mV. After this second reversal, the
membrane slowly returns to its resting potential at –70 mV.
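The arithmetic of these voltage changes can be traced in a few lines of Python (a sketch added here for illustration; the millivolt values are the approximate figures given above, not measurements).

rest_mv = -70.0        # resting potential
threshold_mv = -50.0   # threshold potential
peak_mv = 30.0         # approximate peak of the action potential

to_threshold = threshold_mv - rest_mv   # 20 mV of depolarization reaches threshold
rest_to_peak = peak_mv - rest_mv        # 100 mV total change from rest to peak

print(to_threshold)   # 20.0
print(rest_to_peak)   # 100.0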
threshold potential Voltage on a neural membrane at which an
action potential is triggered by the opening of sodium and potassium
voltage-sensitive channels; about –50 mV relative to extracellular
surround. Also called threshold limit.
Nerve Impulse
Suppose you place two recording electrodes at a distance from one
another on an axon membrane, then electrically stimulate an area
adjacent to one electrode. That electrode would immediately record an
action potential. A similar recording would register on the second
electrode in a flash. An action potential has arisen near this second
electrode also, even though it is some distance from the original point of
stimulation.
Is this second action potential simply an echo of the first that passes
down the axon? No, it cannot be, because the action potential’s size and
shape are exactly the same at the two electrodes. The second is not just a
faint, degraded version of the first but is equal in magnitude. Somehow
the full action potential has moved along the axon. This propagation of
an action potential along an axon is called a nerve impulse.
nerve impulse Propagation of an action potential on the membrane
of an axon.
Why does an action potential move? Remember that the total voltage
change during an action potential is 100 mV, far beyond the 20-mV
change needed to bring the membrane from its resting state of –70 mV to
the action potential threshold level of –50 mV. Consequently, the voltage
change on the part of the membrane where an action potential first
occurs is large enough to bring adjacent parts of the membrane to a
threshold of –50 mV.
When the membrane at an adjacent part of the axon reaches –50 mV,
the voltage-sensitive channels at that location pop open to produce an
action potential there as well. This second occurrence in turn induces a
change in the membrane voltage still farther along the axon, and so on
and on, down the axon’s length. Figure 4-19 illustrates this process. The
nerve impulse occurs because each action potential propagates another
action potential on an adjacent part of the axon membrane. The word
propagate means to give birth, and that is exactly what happens. Each
successive action potential gives birth to another down the length of the
axon.
FIGURE 4-19 Propagating an Action Potential Voltage sufficient to open
Na+ and K+ channels spreads to adjacent sites of the axon membrane, inducing voltage-
sensitive gates to open. Here, voltage changes are shown on only one side of the
membrane.
To review glial cell types, appearance, and functions, see Table 3-1 .
But axons are not totally encased in myelin. Unmyelinated gaps
between successive glial cells are richly endowed with voltage-sensitive
channels. These tiny gaps in the myelin sheath, the nodes of Ranvier,
are sufficiently close to one another that an action potential at one node
can open voltage-sensitive gates at an adjacent node. In this way, a
relatively slow action potential jumps quickly from node to node, as
shown in Figure 4-21 . This flow of energy is called saltatory
conduction (from the Latin verb saltare, meaning to leap ).
node of Ranvier The part of an axon that is not covered by myelin.
saltatory conduction Fast propagation of an action potential at
successive nodes of Ranvier; saltatory means leaping.
Myelin has two important consequences for propagating action
potentials. First, propagation becomes energetically cheaper, since action
potentials regenerate only at the nodes of Ranvier, not along the axon’s
entire length. Action potential conduction in unmyelinated axons, by
contrast, has a significant metabolic energy cost (Crotty et al., 2006).
The second consequence is that myelin improves the action potential’s
conduction speed.
Jumping from node to node speeds the rate at which an action
potential can travel along an axon, because the current flowing within the
axon beneath the myelin sheath travels very fast. While the current
moves speedily, the voltage drops quickly over distance. But the nodes
of Ranvier are spaced ideally to ensure sufficient voltage at the next node
to regenerate the action potential. On larger, myelinated mammalian
axons, nerve impulses can travel at a rate as high as 120 meters per
second. On smaller, uninsulated axons they travel only about 30 meters
per second.
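To see what these speeds mean in practice, the short Python sketch below (our illustration, not from the text) converts them into travel times along a hypothetical 1-meter axon, roughly the distance from the spinal cord to the foot in an adult.

axon_length_m = 1.0                 # hypothetical 1-meter axon
myelinated_m_per_s = 120.0          # large myelinated mammalian axon
unmyelinated_m_per_s = 30.0         # smaller uninsulated axon

t_myelinated_ms = axon_length_m / myelinated_m_per_s * 1000.0      # about 8.3 ms
t_unmyelinated_ms = axon_length_m / unmyelinated_m_per_s * 1000.0  # about 33.3 ms

print(round(t_myelinated_ms, 1), "ms for the myelinated axon")
print(round(t_unmyelinated_ms, 1), "ms for the unmyelinated axon")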
Spectators at sporting events sometimes initiate a wave that travels
around a stadium. Just as one person rises, the next person begins to rise,
producing the wave effect. This human wave is like conduction along an
unmyelinated axon. Now think of how much faster the wave would
complete its circuit around the field if only spectators in the corners rose
to produce it. This is analogous to a nerve impulse that travels by
jumping from one node of Ranvier to the next. The quick reactions that
humans and other mammals are capable of are due in part to this
saltatory conduction in their nervous system.
Neurons that send messages over long distances quickly, including
sensory and motor neurons, are heavily myelinated. If myelin is
damaged, a neuron may be unable to send any messages over its axons.
In multiple sclerosis (MS ), the myelin formed by oligodendroglia is
damaged, which disrupts the functioning of neurons whose axons it
encases. Clinical Focus 4-2 , Multiple Sclerosis on page 126 , describes
the course of the disease.
multiple sclerosis (MS) Nervous system disorder resulting from the
loss of myelin around axons in the CNS.
FIGURE 4-21 Saltatory Conduction Myelinated stretches of axons
are interrupted by nodes of Ranvier, rich in voltage-sensitive channels. In
saltatory conduction, the action potential jumps rapidly from node to node.
CLINICAL FOCUS 4-2
Multiple Sclerosis
One day J. O., who had just finished university requirements to
begin work as an accountant, noticed a slight cloudiness in her right
eye. It did not go away when she wiped her eye. Rather, the area
grew over the next few days. Her optometrist suggested that she see
a neurologist, who diagnosed optic neuritis, an indicator that can be an
early flag for multiple sclerosis (MS).
MS is caused by a loss of myelin produced by oligodendroglia
cells in the CNS (see illustration). It disrupts the affected neurons’
ability to propagate action potentials via saltatory conduction. This
loss of myelin occurs in patches, and scarring frequently results in
the affected areas.
Eventually, a hard scar, or plaque, forms at the site of myelin loss.
(MS is called a sclerosis from the Greek word meaning hardness. )
Associated with the loss of myelin is impairment of neuron function,
causing characteristic MS symptoms of sensory loss and difficulty in
moving.
Fatigue, pain, and depression are commonly associated with MS.
Bladder dysfunction, constipation, and sexual dysfunction all
complicate it. MS, about twice as common in women as in men,
greatly affects a person’s emotional, social, and vocational
functioning.
Multiple sclerosis is the most common of nearly 80 autoimmune
diseases, conditions in which the immune system makes antibodies
to a person’s own body (Rezania et al., 2012). Although MS patients
are treated with anti-inflammatory agents, it has no cure as yet.
autoimmune disease Illness resulting from an abnormal
immune response by the body against substances and tissues
normally present in the body.
J. O.’s eye cleared over the next few months, and she had no
further symptoms until after the birth of her first child 3 years later,
when she felt a tingling in her right hand. The tingling spread up her
arm, until gradually she lost movement in the arm for 5 months.
Then J. O.’s arm movement returned. But 5 years later, after her
second child was born, she felt a tingling in her left big toe that
spread along the sole of her foot and then up her leg, eventually
leading again to loss of movement. J. O. received corticosteroid
treatment, which helped, but the condition rebounded when she
stopped treatment. Then it subsided and eventually disappeared.
Since then, J. O. has had no major outbreaks of motor
impairment, but she reports enormous fatigue, takes long naps daily,
and is ready for bed early in the evening. Her sister and a female
cousin have experienced similar symptoms, and recently a third
sister began to display similar symptoms in middle age. One of J.
O.’s grandmothers was confined to a wheelchair, although the source
of her problem was never diagnosed.
MS is difficult to diagnose. Symptoms usually appear in
adulthood; their onset is quite sudden, and their effects can be swift.
Initial symptoms may be loss of sensation in the face, limbs, or body
or loss of control over movements or loss of both sensation and
control. Motor symptoms usually appear first in the hands or feet.
Early symptoms often go into remission and do not appear again
for years. In some forms, however, MS progresses rapidly over just a
few years until the person is bedridden.
MS is common in the most northern and most southern latitudes,
so it may be related to a lack of vitamin D, which is produced by the
action of sunlight on the skin. The disease may also be related to
genetic susceptibility, as is likely in J. O.’s case. Many MS patients
take vitamin D3 and vitamin B12 .
It has been suggested that blood flow from the brain is reduced in
MS, allowing a buildup of toxic iron, and widening the veins that drain
blood from the brain has been proposed as a treatment. Clinical trials
were initiated on the basis of media reports and pressure from patient
groups rather than on established scientific evidence. It has
been argued that methodological flaws and lack of evidence
disqualify both the venous cause of MS and venous widening as an
appropriate treatment (Valdueza et al., 2013).
Normal myelinated nerve fiber
Nerve affected by MS
4-2 REVIEW
Electrical Activity of a Membrane
Before you continue, check your understanding.
1 . The ___________ results from the unequal distribution of ___________
inside and outside the cell membrane.
2 . Because it is ___________, the cell membrane prevents the efflux of
large protein anions and pumps sodium ions out of the cell to maintain a
slightly ___________ charge in the intracellular fluid relative to the
extracellular fluid.
3 . For a graded potential to arise, a membrane must be stimulated to the
point that the transmembrane charge increases slightly to cause a(n)
___________ or decreases slightly to cause a(n) ___________.
4 . The voltage change associated with a(n) ___________ is sufficiently
large to stimulate adjacent parts of the axon membrane to the threshold
for propagating it along the length of an axon as a(n) ___________.
5 . Briefly explain why nerve impulses travel faster on myelinated than on
unmyelinated axons.
Answers appear at the back of the book.
Both EPSPs and IPSPs last only a few milliseconds before they decay
and the neuron’s resting potential is restored. EPSPs are associated with
the opening of sodium channels, which allows an influx of sodium ions.
IPSPs are associated with the opening of potassium channels, which
allows an efflux of potassium ions (or with the opening of chloride
channels, which allows an influx of chloride ions).
Although the size of a graded potential is proportional to the intensity
of the stimulation, an action potential is not produced on the motor
neuron’s cell body membrane even when an EPSP is strongly excitatory.
The reason is simple: the cell body membrane of most neurons does not
contain voltage-sensitive channels. The stimulation must reach the initial
segment, the area near or overlapping the axon hillock, where the action
potential begins. The initial segment is rich in voltage-sensitive channels
(Bender & Trussel, 2012).
initial segment Area near or overlapping the axon hillock where the
action potential begins.
A brief video at www.youtube.com/watch?v=-qdH3WGL99Q
describes how the initial segment initiates an action potential.
Summation of Inputs
A motor neuron’s myriad dendritic spines can each contribute to
membrane voltage, via either an EPSP or an IPSP. How do these
incoming graded potentials interact at its membrane? What happens if
two EPSPs occur in succession? Does it matter if the time between them
increases or decreases? What happens when an EPSP and an IPSP arrive
together?
FIGURE 4-22 Temporal Summation (A) Two depolarizing stimulation pulses
(S1 and S2) widely separated in time produce two EPSPs similar in size.
Pulses close together in time partly sum. Simultaneous EPSPs sum as one
large EPSP. (B) Two hyperpolarizing pulses (S1 and S2) widely separated
in time produce two IPSPs similar in size. Pulses close together in time
partly sum. Simultaneous IPSPs sum as one large IPSP.
Temporal Summation
If one excitatory pulse is followed some time later by a second excitatory
pulse, one EPSP is recorded and after a delay, a second identical EPSP is
recorded, as shown at the top left in Figure 4-22 . These two widely
spaced EPSPs are independent and do not interact. If the delay between
them is shortened so that the two occur in rapid succession, however, a
single large EPSP is produced, as shown in the left-center panel of
Figure 4-22 .
Here, the two excitatory pulses are summed—added together to
produce a larger depolarization of the membrane than either would
induce alone. This relation between two EPSPs occurring close together
or even at the same time (bottom left panel) is called temporal
summation. The right side of Figure 4-22 illustrates that equivalent
results are obtained with IPSPs. Therefore, temporal summation is a
property of both EPSPs and IPSPs.
temporal summation Addition of one graded potential to another when
they occur close together in time.
Spatial Summation
How does spacing affect inputs to the cell body membrane? By using
two recording electrodes (R1 and R2 ) we can see the effects of spatial
relations on the summation of inputs.
If two EPSPs are recorded at the same time but on widely separated
parts of the membrane (Figure 4-23 A ), they do not influence one
another. If two EPSPs occurring close together in time are also close
together on the membrane, however, they sum to form a larger EPSP
(Figure 4-23B ). This spatial summation occurs when two separate
inputs are very close to one another both on the cell membrane and in
time. Similarly, two IPSPs sum if they occur at approximately the same
place and time on the cell body membrane but not if they are widely
separated.
spatial summation Addition of one graded potential to another when they
occur close together in space.
Role of Ions in Summation
Summation is a property of both EPSPs and IPSPs in any combination.
These interactions make sense when you consider that ion influx and
efflux are being summed. The influx of sodium ions accompanying one
EPSP is added to the influx of sodium ions accompanying a second
EPSP if the two occur close together in time and space. If the two
influxes are remote in time or in space or in both, no summation is
possible.
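As a rough illustration of this bookkeeping, the Python sketch below (ours; the millivolt values and names are invented for the example) adds EPSPs and subtracts IPSPs that arrive close together in time and space, then checks whether the summed input would bring the initial segment from rest (–70 mV) to threshold (–50 mV).

REST_MV = -70.0
THRESHOLD_MV = -50.0

def summed_potential(rest_mv, graded_inputs_mv):
    """Sum graded potentials onto the resting level (EPSPs positive, IPSPs negative)."""
    return rest_mv + sum(graded_inputs_mv)

inputs_mv = [8.0, 9.0, -3.0, 7.0]   # three EPSPs and one IPSP, in millivolts
membrane_mv = summed_potential(REST_MV, inputs_mv)

if membrane_mv >= THRESHOLD_MV:
    print(membrane_mv, "mV: threshold reached, so an action potential is triggered")
else:
    print(membrane_mv, "mV: below threshold, so no action potential")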
Sinclair Stammers/Science Source
(A) C. elegans
(B) Excitation
(C) Inhibition
Light-Sensitive Channels (A) Normal movements of C. elegans. (B) When
light-sensitive channelrhodopsin-2 ion channels are introduced into its
neurons, C. elegans becomes active and coils when exposed to blue light.
Information from Zhang et al., 2007. (C) With light-sensitive halorhodopsin ion
pumps introduced into its muscles, C. elegans elongates and becomes
immobile when exposed to green-yellow light. Information from Liewald et al.,
2008.
4-3 REVIEW
How Neurons Integrate Information
Before you continue, check your understanding.
1 . Graded potentials that decrease the charge on the cell membrane,
moving it toward the threshold level, are called ___________ because
they increase the likelihood that an action potential will occur. Graded
potentials that increase the charge on the cell membrane, moving it
away from the threshold level, are called ___________ because they
decrease the likelihood that an action potential will result.
2 . EPSPs and IPSPs that occur close together in both ___________ and
___________ are summed. This is how a neuron ___________ the
information it receives from other neurons.
3 . The membrane of the ___________ does not contain voltage-
sensitive ion channels, but if summed inputs excite the ___________
to a threshold level, action potentials are triggered and then propagated
as they travel along the cell’s ___________ as a nerve impulse.
4 . Explain what happens during back propagation.
Answers appear at the back of the book.
How Do Neurons Communicate and Adapt?
Helgiskulason/Getty Images
Puffins fish by diving underwater, propelling themselves by
flapping their short, stubby wings as if flying. During these
dives, their heart displays the diving bradycardia response, just
as our heart does. Here, a puffin emerges from a dive, fish in
beak.
Parkinson Disease
Case VI: The gentleman … is seventy-two years of age…. About
eleven or twelve, or perhaps more, years ago, he first perceived
weakness in the left hand and arm, and soon after found the
trembling to commence. In about three years afterwards the right
arm became affected in a similar manner: and soon afterwards the
convulsive motions affected the whole body and began to interrupt
speech. In about three years from that time the legs became affected.
(James Parkinson, 1817/1989)
In the 1817 essay from which this case study is taken, British
physician James Parkinson reported similar symptoms in six patients,
some of whom he had observed only in the streets near his clinic.
Shaking was usually the first symptom, and it typically began in a hand.
Over a span of years, the shaking spread to include the arm and then
other parts of the body.
As the disease progressed, patients had a propensity to lean forward
and walk on the balls of their feet. They also tended to run forward to
prevent themselves from falling. In the later stages, patients had
difficulty eating and swallowing. They drooled, and their bowel
movements slowed. Eventually, the patients lost all muscular control
and were unable to sleep because of the disruptive tremors.
More than 50 years after James Parkinson’s descriptions, French
neurologist Jean-Martin Charcot named the condition Parkinson’s
disease, known today as Parkinson disease. Three findings have
helped researchers understand its neural basis:
Parkinson disease Motor system disorder correlated with
dopamine loss in the substantia nigra; characterized by tremors,
muscular rigidity, and reduction in voluntary movement.
1. In 1919, Constantin Tretiakoff (1974) studied the brains of nine
Parkinson patients on autopsy and found that the substantia nigra, a
small midbrain nucleus, had degenerated. In the brain of one patient
who had parkinsonian symptoms only on one side of the body, the
substantia nigra had degenerated on the side opposite that of the
symptoms.
2. Chemical examination of the brains of Parkinson patients showed
that disease symptoms appear when the level of dopamine, then a
proposed neurotransmitter, was reduced to less than 10 percent of
normal in the basal ganglia (Ehringer & Hornykiewicz, 1960/1974).
dopamine (DA) Amine neurotransmitter involved in coordinating
movement, attention, learning, and in reinforcing behaviors.
3. Confirming the role of dopamine in a neural pathway connecting the
substantia nigra to the basal ganglia, Urban Ungerstedt found in 1971
that injecting a neurotoxin called 6-hydroxydopamine into rats
selectively destroyed these dopamine-containing neurons and
produced symptoms of Parkinson disease.
Researchers have now linked the loss of dopamine-containing
substantia nigra neurons to an array of causes, including genetic
predisposition, the flu, pollution, insecticides, herbicides, and toxic
drugs. Dopamine itself in other brain areas has been linked not only to
motor behavior but also to some forms of learning and to neural
structures that mediate reward and addiction. This remarkable series of
discoveries, initiated by James Parkinson, has yielded tremendous
insight into brain function.
Universal Pictures/Photofest
Photo by Mike Coppola/Getty Images for the Michael J. Fox Foundation for Parkinson’s
Research
Actor Michael J. Fox gained wide fame in the 1980s for his starring role in the
Back to the Future film series, which included his rendition of Chuck Berry’s pop
classic, “Johnny B. Goode” (left). In 1991, at age 30, Fox was diagnosed with
young-onset Parkinson disease. When he performed the song 20 years later to
benefit the Michael J. Fox Foundation for Parkinson’s Research, he labored but
still had the moves (right).
Structure of Synapses
Loewi’s discovery about the chemical messengers that regulate heart rate
was the first of two seminal findings that form the foundation for current
understanding of how neurons communicate. The second had to wait nearly
30 years, for the invention of the electron microscope. Shown at the right in
Figure 5-1 , it enables scientists to see the structure of a synapse.
Biophoto Associates/Science Source
Light microscope
Joseph F. Gennaro Jr./Science Source, Colorization by: Mary Martin
Electron microscope
FIGURE 5-1 Microscopic Advance Whereas an observer can use a light microscope
(left) to see the general features of a cell, an electron microscope (right) allows the observer
to examine the cell’s organelles in detail.
(A)
(B)
FIGURE 5-2 Chemical Synapse (A) Surrounding the central synapse
in this electron micrograph are glial cells, axons, dendrites, other synapses,
and a synaptic cleft. (B) Within a chemical synapse, storage granules hold
vesicles containing neurotransmitter that travel to the presynaptic membrane
in preparation for release. Neurotransmitter is expelled into the synaptic cleft
by exocytosis, crosses the cleft, and binds to receptor proteins on the
postsynaptic membrane.
Varieties of Synapses
So far we have considered a generic chemical synapse, with features
possessed by most synapses. Synapses vary widely in the nervous system.
Each type is specialized in location, structure, function, and target. Figure
5-6 illustrates this diversity on a single hypothetical neuron.
You have already encountered two kinds of synapses. One is the
axomuscular synapse, in which an axon synapses with a muscle end plate,
releasing acetylcholine. The other synapse familiar to you is the
axodendritic synapse, detailed in Figure 5-2 B, in which the axon terminal
of a neuron synapses with a dendrite or dendritic spine of another neuron.
Figure 4-26 shows both microscopic and schematic views of an
axomuscular synapse.
Figure 5-6 diagrams axon terminals at the axodendritic synapse as well
as the axosomatic synapse, at a cell body; the axoaxonic synapse, on
another axon; and the axosynaptic synapse, on another presynaptic terminal
—that is, at the synapse between some other axon and its target.
Axoextracellular synapses have no specific targets but instead secrete their
transmitter chemicals into the extracellular fluid. In the axosecretory
synapse, a terminal synapses with a tiny blood vessel, a capillary, and
secretes its transmitter directly into the blood. Finally, synapses are not
limited to axon terminals. Dendrites also may send messages to other
dendrites through dendrodendritic synapses.
This wide variety of connections makes the synapse a versatile chemical
delivery system. Synapses can deliver transmitters to highly specific sites or
diffuse locales. Through connections to the dendrites, cell body, or axon of
a neuron, transmitters can control the neuron’s actions in different ways.
Through axosynaptic connections, they can also exert exquisite control
over another neuron’s input to a cell. By secreting transmitters into
extracellular fluid or into the blood, axoextracellular and axosecretory
synapses can modulate the function of large areas of tissue or even the
entire body. Many transmitters secreted by neurons act as hormones
circulating in your blood, with widespread influences on your body.
Gap junctions, shown in Figure 5-3 , further increase the signaling
diversity between neurons. Such interneuronal communication may occur
via dendrodendritic and axoaxonic gap junctions. Gap junctions also allow
neighboring neurons to synchronize their signals through somatosomatic
(cell body to cell body) connections, and they allow glial cells, especially
astrocytes, to pass nutrient chemicals to neurons and to receive waste
products from them.
More about hormones in Section 6-5 .
Receptors
Subsequent to Otto Loewi’s 1921 discovery that excitatory and inhibitory
chemicals control heart rate, many researchers thought that the brain must
work under much the same type of dual control. They reasoned that
norepinephrine and acetylcholine were the transmitters through which
excitatory and inhibitory brain cells worked. They did not imagine what
we know today: the human brain employs a dazzling variety of
neurotransmitters and receptors. The neurotransmitters operate in even
more versatile ways: some may be excitatory at one location and
inhibitory at another, for example, and two or more may team up in a
single synapse so that one makes the other more potent. Moreover, each
neurotransmitter may interact with several varieties of receptors, each
with a somewhat different function.
In this section, you will learn how neurotransmitters are identified and
how they fit within four broad categories on the basis of their chemical
structure. The functional aspects of neurotransmitters interrelate and are
intricate, with no simple one-to-one relation between a single
neurotransmitter and a single behavior. Furthermore, receptor variety is
achieved by the unique combination of protein molecules that come
together to form a functional receptor.
FIGURE 5-9 Renshaw Loop Left: Some spinal cord motor neurons
project to the rat’s forelimb muscles. Right: In a Renshaw loop the main
motor axon (green) projects to a muscle, and its axon collateral remains in
the spinal cord to synapse with a Renshaw interneuron (red). The
Renshaw interneuron contains the inhibitory transmitter glycine, which acts
to prevent motor neuron overexcitation. Both the main motor axon and its
collateral terminals contain acetylcholine. When the motor neuron is highly
excited, it can modulate its activity level through the Renshaw loop (plus
and minus signs).
Small-Molecule Transmitters
The first neurotransmitters identified are the quick-acting small-molecule
transmitters, such as acetylcholine. Typically, they are synthesized from
dietary nutrients and packaged ready for use in axon terminals. When a
small-molecule transmitter has been released from a terminal button, it
can quickly be replaced at the presynaptic membrane.
small-molecule transmitter Quick-acting neurotransmitter
synthesized in the axon terminal from products derived from the diet.
Because small-molecule transmitters or their main components are
derived from the food we eat, diet can influence their abundance and
activity in our bodies. This fact is important in the design of drugs that act
on the nervous system. Many neuroactive drugs are designed to reach the
brain by the same route that small-molecule transmitters or their
precursor chemicals follow: the digestive tract.
Taking drugs orally is easy and comparatively safe, but not all drugs
can traverse the digestive tract. Section 6-1 explains.
Table 5-1 lists some of the best-known and most extensively studied
small-molecule transmitters. In addition to acetylcholine, four amines
(related by a chemical structure that contains an NH group, or amine )
and three amino acids are included in this list. A few other substances,
including histamine, also are classified as small-molecule transmitters.
Among its many functions, which include control of arousal and of
waking, the transmitter histamine (H) can cause the constriction of
smooth muscles. When activated in allergic reactions, histamine
contributes to asthma, a constriction of the airways. You are probably
familiar with antihistamine drugs used to treat allergies.
histamine (H) Neurotransmitter that controls arousal and waking;
can cause the constriction of smooth muscles; when activated in
allergic reactions, constricts airway and contributes to asthma.
ACETYLCHOLINE SYNTHESIS Acetylcholine is present at the junction of
neurons and muscles, including the heart, as well as in the CNS. Figure
5-10 illustrates how ACh molecules are synthesized from choline and
acetate by two enzymes, then broken down. Choline is among the
breakdown products of fats in foods such as egg yolk, avocado, salmon,
and olive oil; acetate is a compound found in acidic foods, such as
vinegar and lemon juice.
As depicted in Figure 5-10 , inside the cell, acetyl coenzyme A (acetyl
CoA) carries acetate to the synthesis site, and a second enzyme, choline
acetyltransferase (ChAT), transfers the acetate to choline to synthesize
acetylcholine. After ACh has been released into the synaptic cleft and
diffuses to receptor sites on the postsynaptic membrane, a third enzyme,
acetylcholinesterase (AChE), reverses the process, breaking down the
transmitter by detaching acetate from choline. The breakdown products
can then be taken back into the presynaptic terminal for reuse.
Table 5-1 Small-molecule transmitters include acetylcholine (ACh); histamine (H); amines such as dopamine (DA) and serotonin (5-HT); and amino acids such as glutamate (Glu) and glycine (Gly).
Peptide Transmitters
More than 50 short amino acid chains of various lengths (fewer than 100 amino acids)
form the families of peptide transmitters, or neuropeptides, listed in
Table 5-2 . Synthesized through the translation of mRNA from
instructions contained in the neuron’s DNA, neuropeptides are
multifunctional chains of amino acids that act as neurotransmitters.
neuropeptide Short (fewer than 100 amino acids), multifunctional amino acid
chain; acts as a neurotransmitter and can act as a hormone; may
contribute to learning.
In some neurons, peptide transmitters are made in the axon terminal,
but most are assembled on the neuron’s ribosomes, packaged in a
membrane by Golgi bodies, and transported by the microtubules to the
axon terminals. The entire process of neuropeptide synthesis and
transport is relatively slow compared with the nearly ready-made small-
molecule neurotransmitters. Consequently, peptide transmitters act slowly
and are not replaced quickly.
Figure 3-15 diagrams peptide bonding and Figure 3-17 , protein
export.
Neuropeptides, however, perform an enormous range of functions in
the nervous system, as might be expected from their large numbers. They
act as hormones that respond to stress, enable a mother to bond with her
infant, regulate eating and drinking and pleasure and pain, and probably
contribute to learning.
Opium and related synthetic chemicals such as morphine, long known
both to produce euphoria and to reduce pain, appear to mimic the actions
of endogenous brain opioid neuropeptides: enkephalins, dynorphins, and
endorphins. (The term enkephalin derives from the phrase in the
cephalon, meaning in the brain or head, whereas the term endorphin is a
shortened form of endogenous morphine. )
FIGURE 5-12 Amino Acid Transmitters Top: Removal of a carboxyl (COOH)
group from the bottom of the glutamate molecule produces GABA. Bottom: Their different
shapes, illustrated by three-dimensional space-filling models, thus allow these amino acid
transmitters to bind to different receptors.
Lipid Transmitters
Predominant among the lipid neurotransmitters are the
endocannabinoids (endogenous cannabinoids), a class of lipid
neurotransmitters synthesized at the postsynaptic membrane to act on
receptors at the presynaptic membrane. The endocannabinoids include
anandamide and 2-AG (2-arachidonoylglycerol), both derived from
arachidonic acid, an unsaturated fatty acid. Poultry and eggs are
especially good sources. Endocannabinoids participate in a diverse set of
physiological and psychological processes that affect appetite, pain, sleep,
mood, memory, anxiety, and the stress response. Their scientific history is
brief but illustrates how science can progress, punctuated by short steps.
endocannabinoid Class of lipid neurotransmitters, including
anandamide and 2-AG, synthesized at the postsynaptic membrane to
act on receptors at the presynaptic membrane; affects appetite, pain,
sleep, mood, memory, anxiety, and the stress response.
CLINICAL FOCUS 5-3
Everett Coll
The movie Awakenings recounts the L-dopa trials conducted by
Oliver Sacks and described in his book of the same title.
Varieties of Receptors
Each of the two general classes of receptor proteins produces a different
effect: one directly changes the postsynaptic membrane’s electrical
potential, and the other induces cellular change indirectly. A dazzling
array of receptor subtypes allows for subtle differences in receptor
function.
Two Classes of Receptors
When a neurotransmitter is released from any of the wide varieties of
synapses onto a wide variety of targets, as illustrated in Figure 5-6 , it
crosses the synaptic cleft and binds to a receptor. What happens next
depends on the receptor type.
Structurally, ionotropic receptors resemble voltage-sensitive
channels, which propagate the action potential. See Figure 4-17 .
Ionotropic receptors allow ions such as Na+, K+, and Ca2+ to
move across a membrane (the suffix -tropic means moving toward).
Figure 5-14 illustrates, an ionotropic receptor has two parts: (1) a binding
site for a neurotransmitter and (2) a pore, or channel. When the
neurotransmitter attaches to the binding site, the receptor quickly changes
shape, either opening the pore and allowing ions to flow through it or
closing the pore and blocking the ion flow. Thus, ionotropic receptors
bring about rapid changes in membrane voltage and are usually
excitatory: they trigger an action potential.
ionotropic receptor Embedded membrane protein; acts as (1) a
binding site for a neurotransmitter and (2) a pore that regulates ion
flow to directly and rapidly change membrane voltage.
In contrast, a metabotropic receptor has a binding site for a
neurotransmitter but lacks its own pore through which ions can flow.
Through a series of steps, activated metabotropic receptors indirectly
produce changes in nearby membrane-bound ion channels or in the cell’s
metabolic activity. Figure 5-15 A shows the first of these two indirect
effects. The metabotropic receptor consists of a single protein that spans
the cell membrane, its binding site facing the synaptic cleft. Each receptor
is coupled to one of a family of guanyl nucleotide–binding proteins, G
proteins for short, shown on the inner side of the cell membrane in
Figure 5-15 A. When activated, a G protein binds to other proteins.
metabotropic receptor Embedded membrane protein with a binding
site for a neurotransmitter linked to a G protein; can affect other
receptors or act with second messengers to affect other cellular
processes, including opening a pore.
G protein Guanyl nucleotide–binding protein coupled to a
metabotropic receptor; when activated, binds to other proteins.
Dopamine (DA) — D1, D2, D3, D4, D5
Histamine (H) — H1, H2, H3
Norepinephrine (NE) — α1a, α1b, α1c, α1d, α2a, α2b, α2c, α2d, β1, β2, β3
* Peptide neurotransmitters and the lipid neurotransmitters anandamide and 2-AG have
specific metabotropic-class receptors. Gaseous neurotransmitters do not have a specific
receptor.
† All metabotropic cholinergic receptors are muscarinic.
It should not be surprising that a brain such as ours, with its incredible
complexity, is built upon a vast array of units, including copious
neurotransmitter types and even more copious receptor types. All this,
and more, allows the human brain to function successfully.
5-2 REVIEW
Varieties of Neurotransmitters and Receptors
Before you continue, check your understanding.
1 . Neurotransmitters are identified using four experimental criteria:
___________, ___________, ___________, and ___________.
2 . The four broad classes of chemically related neurotransmitters are
___________, ___________, ___________, and ___________.
3 . Acetylcholine is composed of ___________ and ___________. After
release into the synaptic cleft, ACh is broken down by ___________,
and the products can be recycled.
4 . Endocannabinoids are ___________ neurotransmitters, made on
demand and released from the ___________ membrane.
5 . Contrast the major characteristics of ionotropic and metabotropic
receptors.
Answers appear at the back of the book.
Behavior
When researchers began to study neurotransmission, they reasoned that
any given neuron would contain only one transmitter at all its axon
terminals. Newer methods of analysis revealed that this hypothesis isn’t
strictly accurate. A single neuron may use one transmitter at one synapse
and a different transmitter at another synapse. Moreover, different
transmitters may coexist in the same terminal or synapse. Neuropeptides
have been found to coexist in terminals with small-molecule transmitters,
and more than one small-molecule transmitter may be found in a single
synapse. In some cases, more than one transmitter may even be packaged
within a single vesicle.
All these findings allow for multiple combinations of
neurotransmitters and receptors for them. They caution as well against
assuming a simple cause-and-effect relation between a neurotransmitter
and a behavior. What are the functions of so many combinations? The
answer will likely vary, depending on the behavior that is controlled.
Generally, neurotransmission is simplified by concentrating on the
dominant transmitter within any given axon terminal. The neuron and its
dominant transmitter can then be associated with a function or behavior.
We now consider some links between neurotransmitters and behavior.
We begin by exploring the three peripheral nervous system divisions:
SNS, ANS, and ENS. Then we investigate neurotransmission in the
central nervous system.
Cholinergic System
Figure 5-18 shows in cross section a rat brain stained for the enzyme
acetylcholinesterase (AChE), which breaks down ACh in synapses, as
diagrammed earlier in Figure 5-10 . The darkly stained areas have high
AChE concentrations, indicating the presence of cholinergic terminals.
AChE permeates the cortex and is especially dense in the basal ganglia.
Many of these cholinergic synapses are connections from ACh nuclei in
the brainstem, as illustrated in the top panel of Figure 5-17 .
The EEG detects electrical signals the brain emits during various
conscious states; see Sections 7-2 and 13-3 .
The cholinergic system participates in typical waking behavior,
attention, and memory. For example, cholinergic neurons take part in
producing one form of waking EEG activity. People affected by the
degenerative Alzheimer disease, which begins with minor forgetfulness,
progresses to major memory dysfunction, and later develops into
generalized dementia, show a profound loss of cholinergic neurons at
autopsy. One treatment strategy for Alzheimer disease is drugs that
stimulate the cholinergic system to enhance alertness. But the beneficial
effects of these drugs are minor at best (Herrmann et al., 2011). Recall
that ACh is synthesized from nutrients in food; thus, the role of diet in
maintaining acetylcholine levels also is being investigated.
Alzheimer disease Degenerative brain disorder related to aging;
first appears as progressive memory loss and later develops into
generalized dementia.
Focus 14-3 details research on Alzheimer disease. Section 16-3
reviews dementias’ causes and treatments.
The brain abnormalities associated with Alzheimer disease are not
limited to the cholinergic neurons, however. Autopsies reveal extensive
damage to the neocortex and other brain regions. As a result, what role,
if any, the cholinergic neurons play in the progress of the disorder is not
yet clear. Perhaps their destruction causes degeneration in the cortex or
perhaps the cause-and-effect relation is the other way around, with
cortical degeneration causing cholinergic cell death. Then too, the loss of
cholinergic neurons may be just one of many neural symptoms of
Alzheimer disease.
Dopaminergic System
Figure 5-17 maps the dopaminergic activating system’s two distinct
pathways. The nigrostriatal dopaminergic system plays a major role in
coordinating movement. As described throughout this chapter in relation
to parkinsonism, when dopamine neurons in the substantia nigra are lost,
the result is a condition of extreme muscular rigidity. Opposing muscles
contract at the same time, making it difficult for an affected person to
move.
Parkinson patients also exhibit rhythmic tremors, especially of the limbs, which signal a release of formerly inhibited movement. Although
the causes of Parkinson disease are not fully known, it can actually be
triggered by the ingestion of certain toxic drugs, as described in Clinical
Focus 5-4 , The Case of the Frozen Addict. Those drugs may act as
selective neurotoxins that specifically kill dopamine neurons in the
substantia nigra.
Dopamine in the mesolimbic dopaminergic system may be the
neurotransmitter most affected in addiction—to food, to drugs, and to
other behaviors that involve a loss of impulse control. A common feature
of addictive behaviors is that stimulating the mesolimbic dopaminergic
system enhances responses to environmental stimuli, thus making those
stimuli attractive and rewarding. Indeed, some Parkinson patients who
take dopamine receptor agonists as medications show a loss of impulse
control that manifests in such behaviors as pathological gambling,
hypersexuality, and compulsive shopping (Moore et al., 2014).
Sections 6-3 , 6-4 , and 12-3 describe drug effects on the mesolimbic
DA system. Sections 6-2 and 7-4 discuss schizophrenia’s possible
causes and Section 16-4 , its neurobiology.
Excessive mesolimbic dopaminergic activity is proposed as well to
play a role in schizophrenia, a behavioral disorder characterized by
delusions, hallucinations, disorganized speech, blunted emotion,
agitation or immobility, and a host of associated symptoms.
Schizophrenia is one of the most common and most debilitating
psychiatric disorders, affecting about 1 in 100 people.
schizophrenia Behavioral disorder characterized by delusions,
hallucinations, disorganized speech, blunted emotion, agitation or
immobility, and a host of associated symptoms.
Noradrenergic System
Norepinephrine (noradrenaline) may participate in learning by
stimulating neurons to change their structure. Norepinephrine may also
facilitate healthy brain development and contribute to organizing
movements. A neuron that uses norepinephrine as its transmitter is
termed a noradrenergic neuron (derived from adrenaline, the Latin
name for epinephrine ).
noradrenergic neuron From adrenaline, Latin for epinephrine ; a
neuron containing norepinephrine.
In the main, behaviors and disorders related to the noradrenergic
system concern the emotions. Some symptoms of major depression —a
mood disorder characterized by prolonged feelings of worthlessness and
guilt, the disruption of typical eating habits, sleep disturbances, a general
slowing of behavior, and frequent thoughts of suicide—may be related to
decreased activity of noradrenergic neurons. Conversely, some
symptoms of mania (excessive excitability) may be related to increased
activity in these same neurons. Decreased NE activity has also been associated with both hyperactivity and attention-deficit/hyperactivity disorder (ADHD).
major depression Mood disorder characterized by prolonged
feelings of worthlessness and guilt, the disruption of normal eating
habits, sleep disturbances, a general slowing of behavior, and
frequent thoughts of suicide.
mania Disordered mental state of extreme excitement.
Serotonergic System
The serotonergic activating system maintains a waking EEG in the
forebrain when we move and thus participates in wakefulness, as does
the cholinergic system. Like norepinephrine, serotonin plays a role in
learning, as described next in Section 5-4 . Some symptoms of
depression may be related to decreased activity in serotonin neurons, and
drugs commonly used to treat depression act on 5-HT neurons.
Consequently, two forms of depression may exist, one related to
norepinephrine and another related to serotonin.
Likewise, some research results suggest that various symptoms of
schizophrenia also may be related to increases in serotonin activity,
which implies that different forms of schizophrenia may exist. Decreased
serotonergic activity is related to symptoms observed in obsessive-
compulsive disorder (OCD), in which a person compulsively repeats
acts (such as hand washing) and has repetitive and often unpleasant
thoughts (obsessions). Evidence also points to a link between
abnormalities in serotonergic nuclei and conditions such as sleep apnea
and sudden infant death syndrome (SIDS).
obsessive-compulsive disorder (OCD) Behavior characterized by
compulsively repeated acts (such as hand washing) and repetitive,
often unpleasant, thoughts (obsessions).
Consult the Index of Disorders inside the book’s front cover for
more information on major depression, mania, ADHD, OCD, sleep
apnea, and SIDS.
5-3 REVIEW
Neurotransmitter Systems and Behavior
Before you continue, check your understanding.
1 . Although neurons can synthesize more than one ___________, they
are usually identified by the principal ___________ in their axon
terminals.
2 . In the peripheral nervous system, the neurotransmitter at somatic
muscles is ___________; in the autonomic nervous system,
___________ neurons from the spinal cord connect with ___________
neurons for parasympathetic activity and with ___________ neurons
for sympathetic activity.
3 . The two principal small-molecule transmitters used by the enteric
nervous system are ___________ and ___________.
4 . The four main activating systems of the CNS are ___________,
___________, ___________, and ___________.
5 . How would you respond to the comment that a behavior is caused
solely by a chemical imbalance in the brain?
Answers appear at the back of the book.
Aplysia californica
Section 14-4 investigates the neural bases of brain plasticity in
conscious learning and in memory.
Habituation Response
In habituation, the response to a stimulus weakens with repeated
stimulus presentations. If you are accustomed to living in the country,
then move to a city, you might at first find the sounds of traffic and
people extremely loud and annoying. With time, however, you stop
noticing most of the noise most of the time. You have habituated to it.
habituation Learned behavior in which the response to a stimulus
weakens with repeated presentations.
Habituation develops with all our senses. When you first put on a
shoe, you feel it on your foot, but very soon it is as if the shoe were not
there. You have not become insensitive to sensations, however. When
people talk to you, you still hear them; when someone steps on your foot,
you still feel the pressure. Your brain simply has habituated to the
customary background sensation of a shoe on your foot.
Aplysia habituates to waves in the shallow tidal zone where it lives.
These slugs are constantly buffeted by the flow of waves against their
body, and they learn that waves are just the background noise of daily
life. They do not flinch and withdraw every time a wave passes over
them. They habituate to this stimulus.
A sea slug that is habituated to waves remains sensitive to other touch
sensations. Prodded with a novel object, it responds by withdrawing its
siphon and gill. The animal’s reaction to repeated presentations of the
same novel stimulus forms the basis for Experiment 5-2 , studying its
habituation response.
Neural Basis of Habituation
The Procedure section of Experiment 5-2 shows the setup for studying
what happens to the withdrawal response of Aplysia ’s gill after repeated
stimulation. A gentle jet of water is sprayed on the siphon while gill
movement is recorded. If the water jet is presented to Aplysia ’s siphon
as many as 10 times, the gill withdrawal response is weaker some
minutes later, when the animal is again tested. The decrement in the
strength of the withdrawal is habituation, which can last as long as 30
minutes.
The Results section of Experiment 5-2 starts by showing a simple
representation of the pathway that mediates Aplysia ’s gill withdrawal
response. For purposes of illustration, only one sensory neuron, one
motor neuron, and one synapse are shown; in actuality, about 300
neurons may take part in this response. The water jet stimulates the
sensory neuron, which in turn stimulates the motor neuron responsible
for the gill withdrawal. But exactly where do the changes associated with
habituation take place? In the sensory neuron? In the motor neuron? In
the synapse between the two?
Habituation does not result from an inability of either the sensory or
the motor neuron to produce action potentials. In response to direct
electrical stimulation, both the sensory neuron and the motor neuron
retain the ability to generate action potentials even after habituation.
Electrical recordings from the motor neuron show that as habituation
develops, the excitatory postsynaptic potentials (EPSPs) in the motor
neuron become smaller.
The most likely way in which these EPSPs decrease in size is that the
motor neuron is receiving less neurotransmitter from the sensory neuron
across the synapse. And if less neurotransmitter is being received, then
the changes accompanying habituation must be taking place in the
presynaptic axon terminal of the sensory neuron.
EXPERIMENT 5-2
Results
Conclusion: The withdrawal response weakens with repeated presentation of the water jet (habituation) owing to decreased Ca2+ influx and subsequently less neurotransmitter release from the presynaptic axon terminal.
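The presynaptic account of habituation lends itself to a simple numerical illustration. The sketch below is not the authors' model; it merely assumes, for demonstration, that each repeated siphon stimulation reduces presynaptic Ca2+ influx by a fixed fraction and that transmitter release, and hence EPSP size in the motor neuron, scales with that influx. The function name and all values are arbitrary.

```python
# Minimal illustrative sketch of habituation in the Aplysia gill-withdrawal circuit.
# Assumption (not from the text): each stimulus reduces presynaptic Ca2+ influx by a
# fixed fraction, and the motor neuron's EPSP tracks transmitter release, which in
# turn tracks Ca2+ influx. All numbers are arbitrary.

def habituate(n_stimuli, decay=0.15, initial_ca=1.0):
    """Return relative EPSP sizes across repeated siphon stimulations."""
    epsps = []
    ca_influx = initial_ca
    for _ in range(n_stimuli):
        release = ca_influx            # transmitter release scales with Ca2+ influx
        epsps.append(release)          # EPSP amplitude scales with release
        ca_influx *= (1.0 - decay)     # repeated stimulation lowers Ca2+ influx
    return epsps

if __name__ == "__main__":
    for trial, epsp in enumerate(habituate(10), start=1):
        print(f"stimulus {trial:2d}: relative EPSP = {epsp:.2f}")
```

Running the sketch shows the EPSP shrinking over roughly 10 stimulations, the same qualitative pattern the experiment records over a period of minutes.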
Sensitization Response
A sprinter crouched in her starting blocks is often hyperresponsive to the
starter’s gun: its firing triggers in her a rapid reaction. The stressful,
competitive context of the race helps to sensitize the sprinter to this sound.
Sensitization, an enhanced response to some stimulus, is the opposite of
habituation. The organism becomes hyperresponsive to a stimulus rather
than accustomed to it.
sensitization Learned behavior in which the response to a stimulus
strengthens with repeated presentations.
Sensitization occurs within a context. Sudden, novel stimulation
heightens our general awareness and often results in larger-than-typical
responses to all kinds of stimulation. If a loud noise startles you suddenly,
you become much more responsive to other stimuli in your surroundings,
including some to which you previously were habituated. In
posttraumatic stress disorder (PTSD), physiological arousal related to
recurring memories and dreams surrounding a traumatic event persists for
months or years after the event. One characteristic of PTSD is a
heightened response to stimuli, suggesting that the disorder is in part
related to sensitization.
posttraumatic stress disorder (PTSD) Syndrome characterized by
physiological arousal associated with recurrent memories and dreams
arising from a traumatic event that occurred months or years earlier.
Stress can foster and prolong PTSD effects. See Sections 6-5 and 12-4. Section 16-4 covers treatment strategies.
The same thing happens to Aplysia. Sudden, novel stimuli can heighten
a slug’s responsiveness to familiar stimulation. When attacked by a
predator, for example, the slug displays heightened responses to many
other stimuli in its environment. In the laboratory, a small electric shock to
Aplysia ’s tail mimics a predatory attack and effects sensitization, as
illustrated in the Procedure section of Experiment 5-3 . A single electric
shock to the slug’s tail enhances its gill withdrawal response for a period
that lasts for minutes to hours.
EXPERIMENT 5-3
Results
Conclusion: Enhancement of the withdrawal response after a shock is due to increased Ca2+ influx and subsequently more neurotransmitter release from the presynaptic axon terminal.
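The same presynaptic logic can be extended to sensitization. In the hedged sketch below, a single tail shock is assumed to multiply subsequent transmitter release (via enhanced Ca2+ influx) on every trial that follows it; the function, the boost factor, and the trial numbers are illustrative inventions, not data from Experiment 5-3.

```python
# Illustrative sketch of sensitization: a tail shock is assumed to boost presynaptic
# Ca2+ influx, and so transmitter release, on every trial that follows it.
# The boost factor and trial counts are arbitrary.

def gill_withdrawal(touch_strengths, shock_trial=None, boost=2.0):
    """Return response sizes to a series of siphon touches.

    If shock_trial is given, responses from that trial onward are enhanced,
    mimicking the minutes-to-hours enhancement after a tail shock.
    """
    responses = []
    for trial, strength in enumerate(touch_strengths):
        sensitized = shock_trial is not None and trial >= shock_trial
        gain = boost if sensitized else 1.0
        responses.append(strength * gain)
    return responses

if __name__ == "__main__":
    touches = [1.0] * 6
    print("no shock:  ", gill_withdrawal(touches))
    print("shock at 3:", gill_withdrawal(touches, shock_trial=3))
```

Comparing the two runs shows an enhanced withdrawal after the shock even though the touch stimulus itself never changes.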
Cognitive Enhancement
A new name for an old game? An article in the preeminent science
publication Nature floated the idea that certain “cognitive-
enhancing” drugs improve school and work performance in
otherwise healthy individuals by improving brain function (Greely et
al., 2008). The article was instigated in part by reports that up to 20
percent—and in some schools up to 80 percent—of high school and
university students were using the stimulants Adderall (mainly dextroamphetamine) and methylphenidate (Ritalin) as a study aid to
help meet deadlines and to cram for examinations.
Both drugs are prescribed as a treatment for attention-
deficit/hyperactivity disorder (ADHD), a developmental disorder
characterized by core behaviors: impulsivity, hyperactivity, and/or
inattention. Methylphenidate and dextroamphetamine are Schedule II
drugs, signifying that they carry the potential for abuse and require a
prescription when used medically. Their main illicit sources are falsified prescriptions or purchase from someone who has a
prescription. Both drugs share the pharmacological properties of
cocaine: stimulating dopamine release and blocking its reuptake (see
Section 6-2 ).
attention-deficit/hyperactivity disorder (ADHD)
Developmental disorder characterized by core behavioral
symptoms, including impulsivity, hyperactivity, and/or
inattention.
The use of cognitive enhancers is not new. In his classic paper on
cocaine, Viennese psychoanalyst Sigmund Freud stated in 1884,
“The main use of coca [cocaine] will undoubtedly remain that which
the Indians [of Peru] have made of it for centuries … to increase the
physical capacity of the body.” Freud later withdrew his endorsement
when he realized that cocaine is addictive.
In 1937, an article in the Journal of the American Medical
Association reported that a form of amphetamine, Benzedrine,
improved performance on mental efficiency tests. This information
was quickly disseminated among students, who began using the drug
as a study aid for examinations. In the 1950s, dextroamphetamine,
marketed as Dexedrine, was similarly prescribed for narcolepsy, a
sleep disorder, and used illicitly by students as a study aid.
The complex neural effects of amphetamine stimulants center on
learning at the synapse by means of habituation and sensitization.
With repeated use for nonmedicinal purposes, the drugs can also
begin to produce side effects, including sleep disruption, loss of
appetite, and headaches. Some people develop cardiovascular
abnormalities and/or become addicted to amphetamine.
Treating ADHD with prescription drugs is itself controversial,
despite their widespread use for this purpose. According to Aagaard
and Hansen (2011), assessing the adverse effects of cognitive
enhancement medication is hampered because many participants
drop out of studies and the duration of the studies is short.
Despite the contention that stimulant drugs can improve school
and work performance by improving brain function in otherwise
healthy individuals, evidence for their effectiveness, other than a
transient improvement in motivation, is weak.
To carry out its work, the brain needs, among other substances, oxygen
and glucose for fuel and amino acids to build proteins. Fuel molecules
reach brain cells from the blood, just as carbon dioxide and other waste
products are excreted from brain cells into the blood. Molecules of these
vital substances cross the blood–brain barrier in two ways:
1. Small molecules such as oxygen and carbon dioxide can pass through
the endothelial membrane.
2. Complex molecules of glucose, amino acids, and other food
components are carried across the membrane by active transport
systems or ion pumps—transporter proteins specialized to convey a
particular substance.
Few psychoactive drug molecules are sufficiently small or have the
correct chemical structure to gain access to the CNS. An important
property possessed by those few drugs that have CNS effects, then, is an
ability to cross the blood–brain barrier.
How the Body Eliminates Drugs
After a drug is administered, the body soon begins to break it down
(catabolize) and remove it. Drugs are diluted throughout the body and are
sequestered in many regions, including fat cells. They are also catabolized
throughout the body, including in the kidneys and liver, and in the
intestine by bile. They are excreted in urine, feces, sweat, breast milk, and
exhaled air. Drugs developed for therapeutic purposes are usually
designed not only to increase their chances of reaching their targets but
also to enhance their survival time in the body.
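As a rough way to picture elimination and "survival time," the sketch below uses a common simplifying assumption that is not stated in the text: a constant fraction of the remaining drug is catabolized and excreted per unit time (first-order elimination). The dose, half-lives, and time points are hypothetical.

```python
# Hedged illustration of drug elimination under a common simplification:
# first-order kinetics, in which a constant fraction of the remaining drug is
# removed per unit time. A drug designed for a longer "survival time" (half-life)
# stays at higher levels longer. All numbers are hypothetical.

import math

def remaining(dose, half_life_h, hours):
    """Drug amount remaining after `hours`, assuming first-order elimination."""
    return dose * math.exp(-math.log(2) * hours / half_life_h)

if __name__ == "__main__":
    for t in range(0, 25, 4):
        fast = remaining(100.0, half_life_h=3.0, hours=t)
        slow = remaining(100.0, half_life_h=6.0, hours=t)
        print(f"{t:2d} h: 3-h half-life = {fast:5.1f}   6-h half-life = {slow:5.1f}")
```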
The liver is especially active in catabolizing drugs. Owing to a family
of enzymes involved in drug catabolism, the cytochrome P450 enzyme
family (some are also present in the gastrointestinal tract microbiome), the
liver is capable of breaking down many different drugs into forms more
easily excreted from the body. Substances that cannot be catabolized or
excreted can build up in the body and become toxic. The metal mercury,
for instance, is not easily eliminated and can produce severe neurological
effects.
Catabolic processes break down; metabolic processes build up.
Drugs eliminated from the body and discharged into the environment
are widespread and problematic. They may be reingested, via food and
water, by many animal species, including humans (Brown et al., 2015).
Some may affect fertility, embryonic development, even the physiology
and behavior of adult organisms. The solution is redesigning waste
management systems to remove by-products eliminated by humans as
well as by other animals (Berninger et al., 2015).
FIGURE 6-3 Barrier-Free Brain Sites The pituitary gland is a target for many
blood-borne hormones; the pineal gland, for hormones that affect circadian rhythms. The
area postrema initiates vomiting of noxious substances.
Tolerance
Tolerance is a decreased response to a drug with repeated exposure.
Harris Isbell and coworkers (1955) conducted an experiment that, while
questionable by today’s ethical standards, did suggest how tolerance
comes about. The researchers gave volunteers in a prison enough alcohol
daily over a 13-week period to keep them in a constant state of intoxication.
Yet they found that the participants did not remain drunk for 3 months
straight.
In tolerance, as in habituation, learning takes place when the
response to a stimulus weakens with repeated presentations (see
Experiment 5-2 ).
tolerance Decrease in response to a drug with the passage of time.
When the experiment began, the participants showed rapidly rising
blood alcohol levels and behavioral signs of intoxication, as shown in the
Results section of Experiment 6-1 , on page 178 . Between the twelfth
and twentieth days of alcohol consumption, however, blood alcohol and
the signs of intoxication fell, even though the participants maintained
their alcohol intake. Thereafter, blood alcohol levels and signs of
intoxication fluctuated; one did not always correspond to the other. A
relatively high blood alcohol level was sometimes associated with few outward signs of intoxication. Why?
The three results were the products of three kinds of tolerance, each
much more likely to develop with repeated drug use:
1. In metabolic tolerance, the number of enzymes needed to break down
alcohol in the liver, blood, and brain increases. As a result, any alcohol
consumed is metabolized more quickly, so blood alcohol levels fall.
2. In cellular tolerance, brain cell activities adjust to minimize the effects
of alcohol in the blood. Cellular tolerance can help explain why the
behavioral signs of intoxication may be so low despite a relatively high
blood alcohol level.
3. Learned tolerance explains a drop in outward signs of intoxication. As
people learn to cope with the demands of living under the influence of
alcohol, they may no longer appear intoxicated.
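The first of these mechanisms, metabolic tolerance, can be pictured with a small sketch. It assumes, purely for illustration, that enzyme capacity for clearing alcohol grows by a small fraction with each day of steady intake, so the alcohol left circulating at a fixed daily dose declines, loosely echoing the falling blood alcohol levels in Experiment 6-1. None of the numbers come from the study.

```python
# Illustrative sketch of metabolic tolerance (not the Isbell study's data or model).
# Assumption: with constant daily intake, liver enzyme capacity grows a little each
# day, so more alcohol is cleared and the blood alcohol level (BAL) at a fixed dose
# falls. All values are arbitrary.

def daily_bal(days, intake=10.0, capacity=5.0, growth=0.08):
    """Return a rough relative BAL for each day of constant intake."""
    bals = []
    for _ in range(days):
        cleared = min(intake, capacity)    # enzymes clear up to their capacity
        bals.append(intake - cleared)      # alcohol not cleared shows up as BAL
        capacity *= (1.0 + growth)         # repeated exposure induces more enzyme
    return bals

if __name__ == "__main__":
    for day, bal in enumerate(daily_bal(20), start=1):
        print(f"day {day:2d}: relative BAL = {bal:.2f}")
```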
Does it surprise you that learning plays a role in alcohol tolerance? The effect, first reported by John Wenger and his coworkers (1981), has been confirmed in many studies. They trained rats to avoid electric foot shocks as they walked on a narrow conveyor belt
sliding over an electrified grid. One group of rats received alcohol after
training in walking the belt; another group received alcohol before
training. A third group received training only, and a fourth group received
alcohol only.
After several days’ exposure to their respective conditions, all groups
received alcohol before a walking test. The rats that had received alcohol
before training performed well, whereas those that had received training
and alcohol separately performed just as poorly as those that had never
had alcohol or those that had not been trained. Despite alcohol
intoxication, then, animals can acquire the motor skills needed to balance
on a narrow belt. With motor experience, they can learn to compensate for
being intoxicated.
EXPERIMENT 6-1
Results
Conclusion: Because of tolerance, as the study progressed, much more
alcohol was required to obtain the same level of intoxication that was
produced at the beginning.
Information from H. Isbell, H. F. Fraser, A. Winkler, R. E. Belleville, and A. J.
Eisenman (1955). An experimental study of the etiology of “rum fits” and delirium
tremens. Quarterly Journal of Studies on Alcohol, 16, pp. 1–21.
Sensitization
Drug tolerance is much more likely to develop with repeated use than
with periodic use, but tolerance does not always follow repeated
exposure to a drug. Tolerance resembles habituation in that the response
to the drug weakens with repeated presentations. The drug user may have
the opposite reaction, sensitization—increased responsiveness to
successive equal doses. Whereas tolerance generally develops with
repeated drug use, sensitization is much more likely to develop with
periodic use.
To demonstrate drug sensitization, Terry Robinson and Jill Becker
(1986) isolated rats in observation boxes and recorded their reactions to
an injection of amphetamine, which increases the amount of dopamine available to stimulate dopamine receptors.
Every 3 or 4 days, the investigators injected the rats and found their
motor activities—sniffing, rearing, and walking—more vigorous with
each administration of the same drug dose, as graphed in Results 1 of
Experiment 6-2 .
The increased motor activity on successive tests was not due to the
animals becoming comfortable with the test situation. Control animals
that received no drug failed to display a similar escalation.
Administering the drug to rats in their home cages did not affect activity
in subsequent tests, either. Moreover, the sensitization to amphetamine
was enduring. Even when two injections were separated by months, the
animals still showed an escalation of motor behavior. Even a single
exposure to amphetamine produced sensitization.
Experiment 5-3 describes sensitization at the level of neurons and
synapses. Section 14-4 relates sensitization to neuroplasticity and
learned addictions.
EXPERIMENT 6-2
Antidepressants
MAO inhibitors
Tricyclic antidepressants: imipramine (Tofranil)
SSRIs (atypical antidepressants): fluoxetine (Prozac); sertraline
(Zoloft); paroxetine (Paxil, Seroxat)
Mood stabilizers
Lithium, sodium valproate, carbamazepine (Tegretol)
Major Depression
P. H. was a 53-year-old high school teacher who, although popular
with his students, was deriving less and less satisfaction from his
work. His marriage was foundering because he was growing apathetic
and no longer wanted to socialize or go on vacations. He was having
difficulty getting up in the morning and arriving at school on time.
P. H. eventually consulted a physician, complaining of severe chest
pains, which he feared signaled an impending heart attack. He
informed his doctor that a heart attack would be a welcome relief
because it would end his problems. The physician concluded that P.
H. had depression and referred him to a psychiatrist.
Since the 1950s, depression has been treated with antidepressant
drugs, a variety of cognitive-behavioral therapies (CBTs), and
electro-convulsive therapy (ECT), in which electrical current is
passed briefly through one hemisphere of the brain. Of the drug
treatments available, tricyclic antidepressants and SSRIs are favored.
The risk of suicide and self-injurious behaviors is high in major
depression, especially among depressive adolescents who are resistant
to treatment with SSRIs (Asarnow et al., 2011). Even for patients who
do respond positively to SSRI treatment, the benefits may not occur
for weeks.
The glutamate antagonist ketamine, when given at doses smaller than those used for anesthesia, can produce rapid beneficial effects that last for weeks, even in patients who are resistant to SSRI medication (Reinstatler and Youssef, 2015). Ketamine is thus proposed as an acute treatment for patients who are at risk for suicide, whether they have major depression or bipolar depression.
Prompted by complaints from family members that antidepressant
drug treatments have caused suicide, especially in children, the U.S.
Food and Drug Administration has advised physicians to monitor the
side effects of SSRIs, including fluoxetine (Prozac), sertraline
(Zoloft), and paroxetine (Paxil, Seroxat). Findings from several studies show no difference in the suicide rate between children and adolescents who receive SSRIs and those who receive a placebo, and after prescriptions were curtailed in the wake of the FDA warning, the incidence of suicide actually increased (Isacsson and Rich, 2014).
Mood Stabilizers
Bipolar disorder, once referred to as manic-depressive psychosis, is
characterized by periods of depression alternating with normal periods
and periods of intense excitation, or mania. According to the National
Institute of Mental Health, bipolar disorder may affect as much as 2.6%
of the adult population of the United States.
bipolar disorder Mood disorder characterized by periods of
depression alternating with normal periods and periods of intense
excitation, or mania.
The difficulty in treating bipolar disorder with drugs relates to the
difficulty in understanding how a disease produces symptoms that appear
to be opposites: mania and depression. Consequently, bipolar disorder
often is treated with numerous drugs, each directed toward a different
symptom. Mood stabilizers, which include the salt lithium, mute the
intensity of one pole of the disorder, thus making the other less likely to
occur. Lithium does not directly affect mood and so may act by
stimulating mechanisms of neuronal repair, such as the production of
neuron growth factors.
mood stabilizer Drug for treating bipolar disorder; mutes the
intensity of one pole of the disorder, thus making the other pole less
likely to recur.
A variety of drugs for epilepsy (carbamazepine, valproate) have
positive effects; perhaps they mute the excitability of neurons during the
mania phase. And antipsychotic drugs that block D2 receptors effectively
control the hallucinations and delusions associated with mania. It is
important to remember, though, that all these treatments have side effects:
enhancing beneficial effects while minimizing side effects is a major
focus of new drug development (Grande and Vieta, 2015).
FIGURE 6-10 Potent Poppy Opium is obtained from the seedpods of the opium poppy (left). Morphine (center) is extracted from opium, and heroin (right) is in turn synthesized from morphine.
Group V: Psychotropics
Psychotropic drugs are stimulants that mainly affect mental activity,
motor activity, arousal, perception, and mood. Behavioral stimulants
affect motor activity and mood. Psychedelic and hallucinogenic
stimulants affect perception and produce hallucinations. General
stimulants mainly affect mood.
Behavioral Stimulants
Behavioral stimulants increase motor behavior as well as elevating mood
and alertness. Rapid administration of behavioral stimulants is most likely
to be associated with addiction. As shown in Figure 6-1 , the quicker a
drug reaches its target—in this case, the brain—the quicker it takes effect.
Further, with each obstacle eliminated en route to the brain, drug dosage
can be reduced by a factor of 10, making it cheaper per dose. Two
behavioral stimulants are amphetamine and cocaine.
Amphetamine is a synthetic compound. It was discovered in attempts
to synthesize the CNS neurotransmitter epinephrine, which also acts as a
hormone to mobilize the body for fight or flight in times of stress (see
Figure 6-20 ). Both amphetamine and cocaine are dopamine agonists that
act first by blocking the dopamine reuptake transporter. Interfering with
the reuptake mechanism leaves more dopamine available in the synaptic
cleft. Amphetamine also stimulates dopamine release from presynaptic
membranes. Both mechanisms increase the amount of dopamine available
in synapses to stimulate dopamine receptors. As noted in Focus 6-1,
amphetamine-based drugs are widely prescribed to treat ADHD.
amphetamine Drug that releases the neurotransmitter dopamine into its synapse and, like cocaine, blocks dopamine reuptake.
Section 5-1 describes experiments Otto Loewi performed to identify
epinephrine, or adrenaline. Section 7-7 details symptoms and
outcomes of ADHD and the search for an animal model of the
disease.
FIGURE 6-11 Behavioral Stimulant Cocaine (left) is obtained from
the leaves of the coca plant (center). Crack cocaine (right) is chemically
altered to form rocks that vaporize when heated at low temperatures.
FIGURE 6-13 Cannabis sativa The hemp plant, an annual herb, grows
over a wide range of altitudes, climates, and soils. Hemp has myriad uses,
including in manufacturing rope, cloth, and paper.
Responses to Drugs
Many behaviors trigger predictable results. You strike the same piano
key repeatedly and hear the same note each time. You flick a light switch
today, and the bulb glows exactly as it did yesterday. This cause-and-
effect consistency does not extend to the effects of psychoactive drugs.
Individuals respond to drugs in remarkably different ways at different
times.
Behavior on Drugs
Ellen is a healthy, attractive, intelligent 19-year-old university freshman
who knows the risks of unprotected sexual intercourse. She learned
about HIV and other sexually transmitted diseases (STDs) in her high
school health class. A seminar about the dangers of unprotected sexual
intercourse was part of her college orientation: seniors provided the
freshmen in her residence free condoms and safe sex literature. Ellen and
her former boyfriend were always careful to use latex condoms during
intercourse.
At a homecoming party in her residence hall, Ellen has a great time,
drinking and dancing with her friends and meeting new people. She is
particularly taken with Brad, a sophomore at her college, and the two of
them decide to go back to her room to order a pizza. One thing leads to
another, and Ellen and Brad have sexual intercourse without using a
condom. The next morning, Ellen wakes up, dismayed and surprised at
her behavior and concerned that she may be pregnant or may have
contracted an STD. She is terrified that she may have AIDS (MacDonald
et al., 2000).
What happened to Ellen? What is it about drugs, especially alcohol,
that makes people sometimes do things they would not ordinarily do?
Alcohol is linked to many harmful behaviors that are costly both to
individuals and to society. These harmful behaviors include not only
unprotected sexual activity but also driving while intoxicated, date rape,
spousal or child abuse and other aggressive behaviors, and crime.
Among the explanations for alcohol’s effects are disinhibition, learning,
and behavioral myopia.
Disinhibition and Impulse Control
An early and still widely held explanation of alcohol’s effects is
disinhibition theory. It holds that alcohol has a selective depressant
effect on the cortical brain region that controls judgment while sparing
subcortical structures, those responsible for more instinctual behaviors,
such as desire. Stated differently, alcohol depresses learned inhibitions
based on reasoning and judgment while releasing the “beast” within.
disinhibition theory Explanation holding that alcohol has a
selective depressant effect on the brain’s frontal cortex, which
controls judgment, while sparing subcortical structures responsible
for more instinctual behaviors, such as desire.
A variation of disinhibition theory argues that the frontal lobes check
impulsive behavior. According to this idea, impulse control is impaired
after drinking alcohol because of a higher relative sensitivity of the
frontal lobes to alcohol. A person may then engage in risky behavior
(Hardee et al., 2014).
Proponents of these theories often excuse alcohol-related behavior,
saying for example, “She was too drunk to know better” or “The boys
had a few too many and got carried away.” Do disinhibition and impulse
control explain Ellen’s behavior? Not entirely. Ellen had used alcohol in
the past and managed to practice safe sex despite its effects. Neither
theory explains why her behavior was different on this occasion. If
alcohol is a disinhibitor, why is it not always so?
Learning
Craig MacAndrew and Robert Edgerton (1969) questioned disinhibition
theory along just these lines in their book Drunken Comportment. They
cite many instances in which behavior under the influence of alcohol
changes from one context to another. People who engage in polite social
activity at home when consuming alcohol may become unruly and
aggressive when drinking in a bar.
Even behavior at the bar may be inconsistent. Take Joe, for example.
While drinking one night at a bar, he acts obnoxious and gets into a fight.
On another occasion, he is charming and witty, even preventing a fight
between two friends; on a third occasion, he becomes depressed and
worries about his problems. MacAndrew and Edgerton also cite
examples of cultures in which people are disinhibited when sober only to
become inhibited after consuming alcohol and cultures in which people
are inhibited when sober and become more inhibited when drinking.
What explains all these differences in alcohol’s effects?
MacAndrew and Edgerton suggested that behavior under the effects
of alcohol is learned. Learned behavior is specific to culture, group, and
setting and can in part explain Ellen’s decision to sleep with Brad. Where
alcohol is used to facilitate social interactions, behavior while intoxicated
is a time-out from more conservative rules regarding dating.
Behavioral Myopia
But Ellen’s lapse in judgment regarding safe sex is more difficult to
explain by learning theory. Ellen had never practiced unsafe sex before
and had never made it a part of her time-out social activities. So why did
she engage in it with Brad?
A different explanation for alcohol-related lapses in judgment,
behavioral myopia (nearsightedness), is the tendency for people under
the influence of (in this case) alcohol to respond to a restricted set of
immediate and prominent cues while ignoring more remote cues and
possible consequences. Immediate and prominent cues are very strong
and obvious and close at hand (Griffin et al., 2010).
behavioral myopia “Nearsighted” behavior displayed under the
influence of alcohol: local and immediate cues become prominent;
remote cues and consequences are ignored.
In an altercation, the person with behavioral myopia will be quicker
than usual to throw a punch, because the fight cue is so strong and
immediate. At a raucous party, the myopic drinker will be more eager
than usual to join in, because the immediate cue of boisterous fun
dominates his or her view. Once Ellen and Brad arrived at Ellen’s room,
the sexual cues at the moment were far more immediate than concerns
about long-term safety. As a result, Ellen responded to those immediate
cues and behaved atypically.
People who enjoy high-risk adventure may be genetically predisposed to
experiment with drugs, but people with no interest in risk taking are just as
likely to use drugs. Section 6-4 discusses genetic influences on drug
taking.
Wanting-and-Liking Theory
To account for all the facts about drug abuse and addiction, Terry
Robinson and Kent Berridge (2008) proposed the incentive sensitization
theory, also called the wanting-and-liking theory because wanting and
liking are produced by different brain systems. In their terms, wanting is craving, whereas liking is the pleasure the drug produces. With repeated use,
tolerance for liking develops, and the expression of liking (pleasure)
decreases as a consequence (Figure 6-14 ). In contrast, the system that
mediates wanting sensitizes, and craving increases.
wanting-and-liking theory Explanation holding that when a drug is
associated with certain cues, the cues themselves elicit desire for the
drug; also called incentive sensitization theory.
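The opposing trajectories plotted in Figure 6-14 can be illustrated with a minimal sketch. It simply assumes that with each exposure liking shrinks by a constant factor (tolerance) while wanting grows by a constant factor (sensitization); the rates and starting values are arbitrary, not estimates from Robinson and Berridge.

```python
# Illustrative sketch of the wanting-and-liking (incentive sensitization) idea:
# with each drug exposure, "liking" is assumed to habituate while "wanting"
# sensitizes, as in Figure 6-14. The rates are arbitrary.

def wanting_and_liking(uses, liking=1.0, wanting=0.2,
                       tolerance_rate=0.85, sensitization_rate=1.25):
    """Yield (use number, liking, wanting) across repeated drug exposures."""
    for use in range(1, uses + 1):
        yield use, liking, wanting
        liking *= tolerance_rate           # pleasure declines with repeated use
        wanting *= sensitization_rate      # craving grows, driven by drug cues

if __name__ == "__main__":
    for use, liking, wanting in wanting_and_liking(10):
        print(f"use {use:2d}: liking = {liking:.2f}, wanting = {wanting:.2f}")
```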
The first step on the proposed road to drug dependence is the initial
experience, when the drug affects a neural system associated with
pleasure. At this stage, the user may like the substance—including liking
to take it within a given social context. With repeated use, liking the drug
may decline from its initial level. At this stage, the user may also begin to
show tolerance to the drug’s effects and so may begin to increase the
dosage to increase liking.
With each use, the drug taker increasingly associates the cues related
to drug use—be it a hypodermic needle, the room in which the drug is
taken, or the people with whom the drug is taken—with the drug-taking
experience. The user makes this association because the drug enhances
classically conditioned cues associated with drug taking. Eventually,
these cues come to possess incentive salience: they induce wanting, or
craving, the drug-taking experience.
In classical (Pavlovian) conditioning, an animal learns to associate a formerly neutral stimulus (the sound of a bell) with a stimulus (food) that elicits an involuntary response (salivation).
The neural basis of addiction is proposed to involve multiple brain
systems. The decision to take a drug is made in the prefrontal cortex, an
area that participates in most daily decisions. When a drug is taken, it
activates opioid systems in the brainstem that are generally related to
pleasurable experiences. And wanting drugs may spring from activity in
the mesolimbic pathways of the dopaminergic activating system.
In these mesolimbic pathways, diagrammed in Figure 6-15 , the axons
of dopamine neurons in the midbrain project to structures in the basal
ganglia, to the frontal cortex, and to the limbic system. When drug takers
encounter cues associated with drug taking, the mesolimbic system
becomes active, releasing dopamine. Dopamine release is the neural
correlate of wanting.
Another brain system may be responsible for conditioning drug-related
cues to drug taking. Barry Everitt (2014) proposes that the repeated
pairing of drug-related cues to drug taking forms neural associations, or
learning, in the dorsal striatum, a region in the basal ganglia consisting of
the caudate nucleus and putamen. As the user repeatedly takes the drug,
voluntary control gives way to unconscious processes—a habit. The result: drug users lose control of decisions related to drug taking, and what began as voluntary drug use gives way to the craving of addiction.
When a rat is placed in an environment where it anticipates a favored
food or sex, investigators record dopamine increases in the striatum
(see Section 7-5 ).
Multiple findings align with the wanting-and-liking explanation of
drug addiction. Ample evidence confirms that abused drugs and the
context in which they are taken initially have a pleasurable effect and that
habitual users continue using their drug of choice, even when taking it no
longer produces any pleasure. Heroin addicts sometimes report that they
are miserable: their lives are in ruins, and the drug is not even pleasurable
anymore. But they still want it. What’s more, desire for the drug often is
greatest just when the addicted person is maximally high, not during
withdrawal. Finally, cues associated with drug taking—the social
situation, the sight of the drug, and drug paraphernalia—strongly
influence decisions to take, or continue taking, a drug.
Notwithstanding support for a dopamine basis for addiction, recent
research suggests more than one type of addiction. Some rats become
readily conditioned to cues associated with reinforcement, for example a
bar that delivers a reward when pressed. Other animals ignore the bar’s
incentive salience but are attracted to the location where they receive
reinforcement. Animals that display the former behavior are termed sign
trackers and the other group, goal trackers. Sign trackers exposed to
addictive drugs appear to attribute incentive salience to drug-associated
cues. Their drug wanting is dependent upon the brain’s dopamine
systems. Goal trackers may also become addicted, possibly via different
neural systems. Such findings imply at least two types of addiction (Yager
et al., 2015).
FIGURE 6-14 Wanting-and-Liking Theory With repeated drug
use, wanting a drug and liking the drug progress in opposite directions.
Wanting (craving) is associated with drug cues.
FIGURE 6-15 Mesolimbic Dopamine Pathways Axons of
neurons in the midbrain ventral tegmentum project to the basal ganglia,
prefrontal cortex, and hippocampus.
Drug-Induced Psychosis
At age 29, R. B. S. smoked marijuana chronically. For years, he had
been selectively breeding a potent strain of marijuana in anticipation
of the day when it would be legalized. R. B. S. made his living as a
pilot, flying small freight aircraft into coastal communities in the
Pacific Northwest.
One evening, R. B. S. had a sudden revelation: he was no longer
in control of his life. Convinced that a small computer had been
implanted in his brain when he was 7 years old and was
manipulating his behavior, he confided in a close friend, who urged
him to consult a doctor. R. B. S. insisted that he had undergone the
surgery when he participated in an experiment at a local university.
He also claimed that all the other children who participated in the
experiment had been murdered.
The doctor told R. B. S. that the computer implantation was
unlikely but called the psychology department at the university and
got confirmation that children had in fact taken part in an experiment
conducted years before. The records of the study had long since been
destroyed. R. B. S. believed that this information completely
vindicated his story. His delusional behavior persisted and eventually
cost him his pilot’s license.
R. B. S. seemed to compartmentalize the delusion. When asked
why he could no longer fly, he intently recounted the story of the
implant and the murders, asserting that its truth had cost him the
medical certification needed for a license. Then he happily and
appropriately discussed other topics.
R. B. S. had a mild focal psychosis: he was losing contact with
reality. In some cases, this break is so severe and the capacity to
respond to the environment so impaired and distorted that the person
can no longer function. People in a state of psychosis may
hallucinate, may have delusions, or may withdraw into a private
world isolated from people and events around them.
A variety of drugs can produce psychosis, including LSD,
amphetamine, cocaine, and, as shown by this case, marijuana. At low
doses THC, the active ingredient in marijuana, has mild sedative-
hypnotic effects similar to those of alcohol. At the high doses that R.
B. S. used, THC can produce euphoria and hallucinations.
Marijuana comes from the leaves of the hemp plant, Cannabis
sativa. Humans have used hemp for thousands of years to make rope,
paper, cloth, and a host of other products. And marijuana has a
number of beneficial medical effects. In the Pacific Northwest,
marijuana is the largest agricultural crop and makes a larger
contribution to the economy than does forestry. In some states
marijuana can legally be purchased for personal use, and in many
states its medical use is legal. Under federal law, however, it remains
illegal everywhere in the United States.
R. B. S.’s heavy marijuana use certainly raises the suspicion that
the drug had some influence on his delusional condition (Wilkinson
et al., 2014). Cannabis use has been reported to moderately increase
the risk of psychotic symptoms in young people and has a much
stronger effect in those with a predisposition for psychosis,
especially if potent strains are used (Di Forti et al., 2015). Although
there is evidence that heavy marijuana use may be associated with
alterations in brain development, it is unclear whether brain
abnormalities are a result of marijuana use or a causal factor in its
use (Lubman et al., 2015).
Employees fill prescriptions at a medical marijuana clinic in San
Francisco. At this writing California is one of more than 20
states that have decriminalized the use of medical marijuana.
6-4 REVIEW
Explaining and Treating Drug Abuse
Before you continue, check your understanding.
1 . The wanting-and-liking theory of addiction suggests that with
repeated use, ___________ of the drug decreases as a result of
___________, while ___________ increases as a result of
___________.
2 . At the neural level, the decision to take a drug is made in the brain’s
___________. Once taken, the drug activates opioid systems related to
pleasurable experiences in the ___________. Drug cravings may
originate in the ___________, and the repeated pairing of drug-related
cues and drug taking forms neural associations in the ___________
that loosen voluntary control over drug taking.
3 . As an alternative to explanations of susceptibility to addiction based
on genetic ___________, ___________ can account both for the
enduring behaviors that support addiction and for the tendency of drug
addiction to be inherited.
4 . It is hard to determine whether recreational drugs cause brain damage
in humans because it is difficult to distinguish the effects of
___________ from the effects of ___________.
5 . Briefly describe the basis for a reasonable approach to treating drug
addiction.
Answers appear at the back of the book.
Homeostatic Hormones
Homeostatic hormones are essential to life. The body’s internal
environment must remain within relatively constant parameters for us to
function. An appropriate balance of sugars, proteins, carbohydrates,
salts, and water is necessary in the blood, in the extracellular
compartments of muscles, in the brain and other body structures, and in
all cells. The internal environment must be maintained regardless of a
person’s age, activities, or conscious state. As children or adults, at rest
or in strenuous work, when we have overeaten or when we are hungry, to
survive we need a relatively constant internal environment.
Homeostasis comes from the Greek words stasis (standing ) and
homeo (in the same place ).
A typical homeostatic function is controlling blood sugar level. After
a meal, digestive processes result in increased glucose in the blood. One
group of cells in the pancreas releases insulin, a homeostatic hormone
that instructs the enzyme glycogen synthase in liver and muscle cells to
start storing glucose in the form of glycogen. The resulting decrease in
glucose decreases the stimulation of pancreatic cells so that they stop
producing insulin, and glycogen storage stops. When the body needs
glucose for energy, a second pancreatic hormone, glucagon, acts in the liver as a countersignal to insulin. Glucagon stimulates another enzyme, glycogen
phosphorylase, to initiate glucose release from its glycogen storage site.
Normal glucose concentration in the bloodstream varies between 80
and 130 mg per 100 milliliters (about 3.3 oz) of blood.
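The insulin-glucagon loop just described is a negative-feedback system, and its logic can be sketched in a few lines. The thresholds below borrow the 80-130 mg per 100 mL range from the margin note; the correction rates, time steps, and meal size are arbitrary illustrations, not physiological constants.

```python
# Illustrative sketch of the insulin/glucagon negative-feedback loop described above.
# High blood glucose -> insulin -> glycogen storage; low glucose -> glucagon ->
# glucose release. Correction rates and the meal are arbitrary.

def regulate_glucose(glucose, steps, meal_at=2, meal_size=60.0):
    """Return blood glucose (mg/100 mL) over time under simple feedback control."""
    levels = []
    for t in range(steps):
        if t == meal_at:
            glucose += meal_size                  # digestion raises blood glucose
        if glucose > 130.0:                       # pancreas releases insulin
            glucose -= 0.3 * (glucose - 130.0)    # glycogen synthase stores glucose
        elif glucose < 80.0:                      # pancreas releases glucagon
            glucose += 0.3 * (80.0 - glucose)     # glycogen phosphorylase frees glucose
        levels.append(glucose)
    return levels

if __name__ == "__main__":
    for t, g in enumerate(regulate_glucose(100.0, steps=12)):
        print(f"t={t:2d}: glucose = {g:6.1f} mg/100 mL")
```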
Diabetes mellitus is caused by a failure of the pancreatic cells to
secrete enough insulin, or any at all. As a result, blood sugar levels can
fall (hypoglycemia) or rise (hyperglycemia). In hyperglycemia, blood
glucose levels rise because insulin does not instruct body cells to take up
glucose. Consequently, cell function, including neuronal function, can
fail through glucose starvation, even in the presence of high glucose
levels in the blood. Chronic high blood glucose levels cause damage to
the eyes, kidneys, nerves, heart, and blood vessels.
In hypoglycemia, inappropriate diet can lead to low blood sugar
severe enough to cause fainting. Eric Steen and his coworkers (2005)
propose that insulin resistance in brain cells may be related to Alzheimer
disease. They raise the possibility that Alzheimer disease may be a third
type of diabetes.
Hunger and eating are influenced by a number of homeostatic
hormones, including leptin and ghrelin. Leptin (from the Greek for thin), secreted by adipose (animal fat) tissue, inhibits hunger and so is called the satiety hormone. Ghrelin (from the Indo-European gher, meaning to grow), secreted by the gastrointestinal tract, regulates growth hormones
and energy use. Ghrelin also induces hunger. It is secreted when the
stomach is empty; secretion stops when the stomach is full. Leptin and
ghrelin act on receptors on the same neurons of the arcuate nucleus of
the hypothalamus and so contribute to energy homeostasis by managing
eating.
Gonadal Hormones
We are prepared for our adult reproductive roles by the gonadal
hormones that give us our sexual appearance, mold our identity on the
continuum of male to female, and allow us to engage in sex-related
behaviors. Sex hormones begin to act on us even before we are born and
continue their actions throughout our lives.
The male Y chromosome contains a gene called the sex-determining
region Y, or SRY, gene. If cells in the undifferentiated gonads of the early
embryo contain an SRY gene, they develop into a testis, and if they do
not, they develop into an ovary. In the male, the testes produce
testosterone, which masculinizes the body and the brain.
Section 8-4 explains how gonadal hormones participate in brain
development.
The organizational hypothesis proposes that hormone action in the
course of development alters tissue differentiation. Thus testosterone masculinizes the brain early in life: it is taken up by brain cells, where the enzyme aromatase converts it into estrogen. Estrogen
then acts on estrogen receptors to initiate a chain of events that includes
activating certain genes in the cell nucleus. These genes contribute to the
masculinization of brain cells and their interactions with other brain
cells.
organizational hypothesis Proposal that hormonal action during
development alters tissue differentiation; for example, testosterone
masculinizes the brain.
That estrogen, a hormone usually associated with the female,
masculinizes the male brain may seem surprising. Estrogen does not
have the same effect on the female brain, because females have a blood
protein that binds to estrogen and prevents its entry into the brain.
Hormones play a somewhat lesser role in producing the female body and
brain, but they control the mental and physical aspects of menstrual
cycles, regulate many facets of pregnancy and birth, and stimulate milk
production for breastfeeding. In males, gonadal hormones demethylate genes in the preoptic area of the hypothalamus, releasing them to become active. The expression of these genes influences male sexual characteristics and behavior. Thus, active methylation of male sex–related genes maintains the female phenotype (Nugent et al., 2015).
Gonadal hormones contribute to surprising differences in the brain
and in cognitive behavior and play a role in male–female differences in
drug dependence and addiction (see Section 6-3 ). The male brain is
slightly larger than the female brain after corrections are made for body
size, and the right hemisphere is somewhat larger than the left in males.
The female brain has a higher rate both of cerebral blood flow and of
glucose utilization. Differences in size appear in different brain regions,
including nuclei in the hypothalamus related to sexual function, parts of
the corpus callosum that are larger in females, and a somewhat larger
language region in the female brain.
Section 12-5 describes gonadal hormones’ effects on sexual
behavior. Section 15-5 recounts sex differences in thinking patterns.
Three lines of evidence, summarized by Elizabeth Hampson and
Doreen Kimura (2005), support the conclusion that sex-related cognitive
differences result from these anatomical brain differences and that these
cognitive differences also depend in part on the continuing circulation of
the sex hormones. The evidence:
1. Spatial and verbal tests given to females and males in many different
settings and cultures show that males tend to excel in spatial tasks and
females in verbal tasks.
2. Results of similar tests given to female participants in the course of
the menstrual cycle show fluctuations in these test scores with phases
of the cycle. During the phase in which the female sex hormones
estradiol (metabolized from estrogen) and progesterone are at their
lowest levels, women perform comparatively better on spatial tasks;
during the phase in which levels of these hormones are high, women
do comparatively better on verbal tasks.
3. Tests comparing premenopausal and postmenopausal women, women
in various stages of pregnancy, and females and males with varying
levels of circulating sex hormones all provide some evidence that
hormones affect cognitive function.
Sex hormone–related differences in cognitive function are not huge.
Performance scores between males and females overlap broadly. Yet
statistically, the differences are reliable. Similar influences of sex
hormones on behavior are found in other species. Berthold’s rooster
experiment described earlier shows the behavioral effects of testosterone.
Findings from many studies demonstrate that motor skills in female
humans and other animals improve at estrus, a time when progesterone
levels are high.
Anabolic–Androgenic Steroids
A class of synthetic hormones related to testosterone has both muscle-
building (anabolic) and masculinizing (androgenic) effects. Commonly
known simply as anabolic steroids, they were synthesized originally to
build body mass and enhance endurance. Russian weight lifters were the
first to use them, in 1952, to enhance performance and win international
competitions.
anabolic steroid Class of synthetic hormones related to testosterone
that have both muscle-building (anabolic) and masculinizing
(androgenic) effects; also called anabolic–androgenic steroid.
Synthetic steroid use rapidly spread to other countries and sports,
eventually leading to a ban from track and field and then from many
other sports, enforced by drug testing. Testing policy has led to a cat-
and-mouse game in which new anabolic steroids and new ways of taking
them and masking them are devised to evade detection.
Today, the use of anabolic steroids is about equal among athletes and
nonathletes. More than 1 million people in the United States have used
anabolic steroids not only to enhance athletic performance but also to
enhance physique and appearance. Anabolic steroid use in high schools
may be as high as 7 percent for males and 3 percent for females.
The use of anabolic steroids carries health risks. Their administration
results in the body reducing its manufacture of testosterone, which in
turn reduces male fertility and spermatogenesis. Muscle bulk is increased
and so is aggression. Cardiovascular effects include increased risk of
heart attack and stroke. Liver and kidney function may be compromised,
and the risk of tumors may increase. Male-pattern baldness may be
enhanced. Females may have an enlarged clitoris, acne, increased body
hair, and a deepened voice.
Anabolic steroids have approved clinical uses. Testosterone
replacement is a treatment for hypogonadal males. It is also useful for
treating muscle loss subsequent to trauma and for the recovery of muscle
mass in malnourished people. In females, anabolic steroids are used to
treat endometriosis and fibrocystic disease of the breast.
Tuning In to Language
The continuing search to understand the organization and operation
of the human brain is driven largely by emerging technologies. Over
the past decade, neuroscience researchers have developed dramatic
new noninvasive ways to image the brain’s activity in people who
are awake. One technique, functional near-infrared spectroscopy
(fNIRS), gathers light transmitted through cortical tissue to image
oxygen consumption in the brain. NIRS, a form of optical
tomography, is detailed in Section 7-4 .
functional near-infrared spectroscopy (fNIRS) Noninvasive
technique that gathers light transmitted through cortical tissue to
image oxygen consumption; form of optical tomography.
fNIRS allows investigators to measure oxygen consumption as a
surrogate marker of neuronal activity in relatively select cortical
regions, even in newborn infants. In one study (May et al., 2011),
newborns (0–3 days old) wore a mesh cap containing the NIRS
apparatus, made up of optical fibers, as they listened to a familiar or
unfamiliar language.
Newborn with probes placed on the head. Probe configurations are overlaid on
schematics of an infant’s left and right hemispheres: red dots indicate
light-emitting fibers; blue dots indicate light detectors. The light detectors
in the outer strips in both hemispheres sit over regions specialized for
language in adults. From “Language and the newborn brain: Does prenatal
language experience shape the neonate neural response to speech?” by L. May,
K. Byers-Heinlein, J. Gervain, and J. F. Werker, 2011, Frontiers in Psychology,
2, 1–9. Photo by Krista Byers-Heinlein; image by Judit Gervain.
7-1 Measuring and Manipulating Brain and Behavior
During a lecture at a meeting of the Anthropological Society of Paris in
1861, Ernest Auburtin, a French physician, argued that language
functions are located in the brain’s frontal lobes. Five days later a fellow
French physician, Paul Broca, observed a brain-injured patient who had
lost his speech and was able to say only “tan” and utter a swear word.
The patient soon died. Broca and Auburtin examined the man’s brain and
found the focus of his injury in the left frontal lobe.
By 1863 Broca had collected eight similar cases and concluded that
speech is located in the third frontal convolution of the left frontal lobe—
a region now called Broca’s area. Broca’s findings attracted others to
study brain–behavior relationships in patients. The field that developed is
what we now call neuropsychology, the study of the relations between
brain function and behavior with a particular emphasis on humans.
Today, measuring brain and behavior increasingly includes noninvasive
imaging, complex neuroanatomical measurement, and sophisticated
behavioral analyses.
neuropsychology Study of the relations between brain function and
behavior, especially in humans.
Section 10-4 explores the anatomy of language and music and
describes Broca’s contributions.
Results
Healthy rats investigated the mismatch object more than the object that
was in context, but the ADX rats performed at chance. Rats in
another ADX group were given treatments known to increase neuron
generation in the hippocampus (enriched housing and exercise in
running wheels). These rats, in which hippocampal neurogenesis was
restored, were not impaired at the mismatch task.
The confocal photo at right shows a rat hippocampus. A specific
stain was used to identify new neurons, which appear yellow.
Courtesy Bryan Kolb
The block numbers are visible on the examiner’s side of the board but
not on the participant’s side.
Examiner’s view
(A) Corsi block-tapping test
(B) Mirror-drawing task
(C) Test of recent memory
The rat must ignore the room cues and learn that only the cue on the
wall of the pool signals the location of the platform. The platform and
cue are moved on each trial, so the animal is penalized for using room
cues to try to solve the problem.
(C) Landmark-learning task
In one test, rats are trained to reach through a slot to obtain a piece of
sweet food. The movements, which are remarkably similar to the
movements people make in a similar task, can be broken down into
segments. Investigators can score the segments separately, as they are
differentially affected by different types of neurological perturbation.
Experiments in Chapter 14 demonstrate fear conditioning in rats,
plasticity in the monkey’s motor cortex, and neuronal effects of
amphetamine sensitization in rats.
The photo series in Figure 7-4 details how a rat orients its body to the
slot (A), puts its hand through the slot (B), rotates the hand horizontally
to grasp the food (C), then rotates the hand vertically and withdraws it to
obtain the food (D). Contrary to reports common in neurology textbooks,
primates are not the only animals to make fine digit movements, but
because the rat’s hand is small and moves so quickly, digit dexterity in
rodents can be seen only with high-speed videography.
Bryan Kolb
FIGURE 7-4 Skilled Reaching in Rats Movement series
displayed by rats trained to reach through a narrow vertical slot to obtain
sweet food: (A) aim the hand, (B) reach over the food, (C) grasp the food,
(D) withdraw and move food to the mouth.
Brain Lesions
The first—and the simplest—technique used was to ablate (remove or
destroy) tissue. Beginning in the 1920s, Karl Lashley, a pioneer of
neuroscience research, used ablation, and for the next 30 years he tried
to find the site of memory in the brain. He trained monkeys and rats on
various mazes and motor tasks, then removed bits of cerebral cortex with
the goal of producing amnesia for specific memories.
To his chagrin, Lashley failed in his quest. He observed instead that
memory loss was related to the amount of tissue he removed. The only
conclusion Lashley could reach was that memory is distributed
throughout the brain and not located in any single place. Subsequent
research strongly indicates that specific brain functions and associated
memories are indeed localized to specific brain regions. Ironically, just
as Lashley was retiring, William Scoville and Brenda Milner (1957)
described a patient from whose brain Scoville had removed both
hippocampi as a treatment for epilepsy. The surgery rendered this patient
amnesic. During his ablation research, Lashley had never removed the
hippocampi because he had no reason to believe the structures had any
role in memory. And because the hippocampus is not accessible on the
brain’s surface, other techniques had to be developed before subcortical
lesions could be used.
Scoville’s patient, H. M., profiled in Section 14-2 , became the
most-studied case in neuroscience.
The solution to accessing subcortical regions is to use a stereotaxic
apparatus, a device that permits a researcher or a neurosurgeon to target
a specific part of the brain for ablation, as shown in Figure 7-6 . The
head is held in a fixed position, and because the location of brain
structures is fixed in relation to the junction of the skull bones, it is
possible to visualize a three-dimensional brain map.
stereotaxic apparatus Surgical instrument that permits the
researcher to target a specific part of the brain.
Rostral–caudal (front to back) measurements, corresponding to the x-axis
in Figure 7-6, are made relative to the junction of the frontal and
parietal bones (the bregma). Dorsal–ventral (top to bottom)
measurements, the y-axis, are made relative to the surface of the brain.
Medial–lateral measurements, the z-axis, are made relative to the
midline junction of the cranial bones. Atlases of the brains of humans
and laboratory animals have been constructed from postmortem tissue so
that the precise location of any structure can be specified in three-
dimensional space.
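To make the coordinate scheme concrete, the following minimal Python sketch expresses a lesion target as millimeter offsets from bregma along the three axes just described. The structure name and all numerical values are illustrative placeholders, not coordinates taken from an actual atlas.

```python
from dataclasses import dataclass

@dataclass
class StereotaxicTarget:
    """A brain-atlas target expressed as millimeter offsets from skull landmarks."""
    name: str
    ap_from_bregma: float   # rostral-caudal (x-axis): negative = caudal to bregma
    ml_from_midline: float  # medial-lateral (z-axis): distance from the midline
    dv_from_surface: float  # dorsal-ventral (y-axis): depth below the brain surface

# Illustrative placeholder coordinates, not values from a published atlas
target = StereotaxicTarget(name="substantia nigra (hypothetical coordinates)",
                           ap_from_bregma=-5.0,
                           ml_from_midline=2.0,
                           dv_from_surface=-8.0)

print(f"Lower electrode at AP {target.ap_from_bregma} mm, "
      f"ML {target.ml_from_midline} mm, DV {target.dv_from_surface} mm")
```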
Review the brain’s anatomical locations and orientations in The
Basics, in Section 2-1 . The brain atlas in Figure 8-14 tracks cortical
thickness over time.
Consider the substantia nigra. To ablate this region and induce a rat to
display symptoms of Parkinson disease, the researcher first locates the
structure and its three-dimensional coordinates in the brain atlas. A small
hole is then drilled in the skull, as shown in Figure 7-6, and an electrode
is lowered to the substantia nigra. If a current is passed through the
electrode, the tissue in the region of the electrode tip is killed, producing
an electrolytic lesion.
A problem with electrolytic lesions: not only are the neurons of the
targeted tissue killed but so are any nerve fibers passing through the
region (in this case, substantia nigra). One solution is to lower a narrow
metal tube (a cannula) instead of an electrode, infuse a neuron-killing
chemical, and thus produce a neurotoxic lesion. (Figure 7-22 diagrams
this procedure.) A selective toxin can be injected that kills only neurons,
sometimes only certain types of neurons, and spares the fibers.
To make a rat parkinsonian, a toxin can be injected that is selectively
taken up by dopaminergic neurons; this leads to a condition that mimics
human Parkinson pathology. Animals with such neurotoxic lesions have
a variety of motor symptoms including hypokinesia (slowness or
absence of movement), short footsteps, and tremor. Drugs such as L-dopa,
an agonist that enhances dopamine production, and atropine, an
antagonist that blocks the action of acetylcholine, relieve these symptoms
in Parkinson patients. Ian Whishaw and his colleagues (Schallert et al.,
1978) thus were able to selectively lesion the substantia nigra in rats to
produce a behavioral model of Parkinson disease.
hypokinesia Slowness or absence of movement.
The invasive techniques described so far result in permanent brain
damage. With time, the research subject will show compensation, the
neuroplastic ability to modify behavior from that used prior to the
damage. To avoid compensation following permanent lesions,
researchers have also developed temporary and reversible lesion
techniques such as regional cooling, which prevents synaptic
transmission. A hollow metal coil is placed next to a neural structure;
then chilled fluid is passed through the coil, cooling the brain structure to
about 18°C (Lomber & Payne, 1996). When the chilled fluid is removed
from the coil, the brain structure quickly warms, and synaptic
transmission is restored. Another technique involves local administration
of a GABA agonist, which increases local inhibition and in turn prevents
the brain structure from communicating with other structures.
Degradation of the GABA agonist reverses the local inhibition and
restores function.
compensation Following brain damage, neuroplastic ability to
modify behavior from that used prior to the damage.
Ian Whishaw
Shuffling gait of a parkinsonian rat, captured in prints left by its ink-stained
hind feet.
Brain Stimulation
The brain operates on both electrical and chemical energy, so it is
possible to selectively turn brain regions on or off by using electrical or
chemical stimulation. Wilder Penfield, in the mid-twentieth century, was
the first to use electrical stimulation directly on the human cerebral
cortex during neurosurgery. Later researchers used stereotaxic
instruments to place an electrode or a cannula in specific brain locations.
The objective: enhancing or blocking neuronal activity and observing the
behavioral effects.
Read more about Penfield’s dramatic discoveries in Sections 10-4
and 11-2 .
Perhaps the most dramatic research example comes from stimulating
specific regions of the hypothalamus. Rats with electrodes placed in the
lateral hypothalamus will eat whenever the stimulation is turned on. If
the animals have the opportunity to press a bar that briefly turns on the
current, they quickly learn to press the bar to obtain the current, a
behavior known as electrical self-stimulation. It appears that the
stimulation is affecting a neural circuit that involves both eating and
pleasure.
Figure 12-12 diagrams hypothalamus anatomy. Section 12-3 details
its role in motivated and emotional behavior.
Brain stimulation can also be used as a therapy. Electrically stimulating
the intact cortex adjacent to cortex injured by a stroke, for example,
improves motor behaviors such as those illustrated in Figure 7-4. Cam
Teskey and his colleagues (Brown et al., 2011) successfully reversed motor
deficits in a rat model of Parkinson disease by electrically stimulating a
specific brain nucleus.
Deep-brain stimulation (DBS) is a neurosurgical technique.
Electrodes implanted in the brain stimulate a targeted area with a low-
voltage electrical current to facilitate behavior. DBS to subcortical
structures—for example, the globus pallidus in the basal ganglia of
Parkinson patients—makes movements smoother. Medications can often
be reduced dramatically. DBS using several neural targets is an approved
treatment for obsessive-compulsive disorder. Experimental trials are
underway to identify the optimal brain regions for DBS as a treatment for
intractable psychiatric disorders such as major depression (Schlaepfer et
al., 2013) and schizophrenia, possibly for epilepsy, and for stimulating
recovery from traumatic brain injury (TBI).
deep-brain stimulation (DBS) Neurosurgical technique: electrodes
implanted in the brain stimulate a targeted area with a low-voltage
electrical current to produce or facilitate behavior.
View DBS in place in Figure 1-6 .
FIGURE 7-7 Transcranial Magnetic Stimulation (A) In clinical therapy for
depression, TMS influences neural activity in a localized region. (B)
Composite photo shows how TMS works. (A) Marcello Massimini/University of
Milan; (B) composite MRI and PET scan from Dr. Tomáš Paus, Rotman Research
Institute, Baycrest Centre for Geriatric Care.
7-2 Measuring the Brain’s Electrical Activity
The brain is always electrically active, even when we sleep. Electrical
measures of brain activity are important for studying brain function, for
medical diagnosis, and for monitoring the effectiveness of therapies used
to treat brain disorders. The four major techniques for tracking the
brain’s electrical activity are single-cell recording,
electroencephalography (EEG), event-related potentials (ERPs), and
magnetoencephalography (MEG).
In part, these techniques are used to record electrical activity from
different parts of neurons. The electrical behavior of cell bodies and
dendrites, which give rise to graded potentials, tends to be much more
varied and slower than that of axons, which conduct action potentials.
Figure 4-11 diagrams a cell membrane at rest, Figure 4-13 during
graded potentials, and Figure 4-15 generating the action potential.
Among the many practical reasons for using ERPs to study the brain
is the advantage that this EEG technique is noninvasive. Electrodes are
placed on the scalp, not in the brain. Therefore, ERPs can be used to
study humans, including those most frequently used participants: college
students.
Another advantage is cost. Compared to other techniques, such as
brain imaging, EEG and ERP are inexpensive and can be recorded from
many brain areas simultaneously by pasting an array of electrodes
(sometimes more than 200) to different parts of the scalp. Because
certain brain areas respond only to certain sensory stimuli (e.g., auditory
areas respond to sounds and visual areas to sights), relative responses at
different locations can be used to map brain function.
Figure 7-12 shows a multiple-recording method that uses 128
electrodes simultaneously to detect ERPs at many cortical sites.
Computed averaging techniques reduce the masses of information
obtained to simpler comparisons between electrode sites. For example, if
the focus of interest is P3 , a positive wave occurring about 300
milliseconds after the stimulus, the computer can display a graph of the
skull showing only the amplitude of P3 . A computer can also convert the
averages at different sites into a color code, graphically representing the
brain regions most responsive to the signal.
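As a concrete illustration of this averaging step, the short Python sketch below averages simulated stimulus-locked epochs and reads off the amplitude of the resulting wave around 300 milliseconds. The signal and noise values are invented for illustration and are not data from any study described here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated recording: 100 stimulus-locked epochs, 600 ms at 1000 samples/s.
n_trials, n_samples, fs = 100, 600, 1000
times_ms = np.arange(n_samples) * 1000 / fs

# Invented "true" ERP: a positive deflection peaking near 300 ms (a P3-like wave),
# buried in background EEG noise that is much larger on any single trial.
true_erp = 5.0 * np.exp(-((times_ms - 300) ** 2) / (2 * 50 ** 2))   # microvolts
epochs = true_erp + rng.normal(0, 20, size=(n_trials, n_samples))   # noisy trials

# Averaging across trials cancels activity that is not time-locked to the stimulus.
average_erp = epochs.mean(axis=0)

# Amplitude of the averaged wave in a window around 300 ms (the P3 measure).
window = (times_ms >= 250) & (times_ms <= 350)
p3_amplitude = average_erp[window].max()
print(f"P3 amplitude at this electrode: {p3_amplitude:.1f} microvolts")
```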
ERPs not only can detect which brain areas are processing particular
stimuli but also can be used to study the order in which different regions
participate. This second use of ERPs is important because we want to
know the route that information takes as it travels through the brain. In
Figure 7-12 , the participant is viewing a picture of a rat that appears
repeatedly in the same place on a computer screen. The P3 recorded from
the posterior right side of the head is larger than any other P3 occurring
elsewhere, meaning that this region is a hot spot for processing the visual
stimulus. Presumably, this particular participant’s right posterior brain is
central in decoding the picture of the rat 300 milliseconds after it is
presented.
Many other interesting research areas benefit from using ERPs, as
described in Clinical Focus 7-2 , Mild Head Injury and Depression.
ERPs also can be used to study how children learn and process
information differently as they mature. ERPs can examine how a person
with a brain injury compensates for the impairment by using undamaged
brain regions. ERPs can even help reveal which brain areas are most
sensitive to aging and therefore contribute most to declining behavioral
functions among the elderly. This simple, inexpensive research tool can
address all these areas.
FIGURE 7-12 Using ERPs to Image Brain Activity
Magnetoencephalography
Passing a magnetic field across a wire induces an electrical current in the
wire. Conversely, current flowing along a wire induces a magnetic field
around the wire. The same is true in the brain. Neural activity, by
generating an electrical field, also produces a magnetic field. Although
the magnetic field produced by a single neuron is vanishingly small, the
field produced by many neurons is sufficiently strong to be recorded on
the scalp. The record of this phenomenon, a magnetoencephalogram
(MEG), is the magnetic counterpart of the EEG or ERP.
magnetoencephalogram (MEG) Magnetic potentials recorded
from detectors placed outside the skull.
Calculations based on MEG measurements not only describe neuronal
groups’ electrical activity but also localize the cell groups generating the
measured field in three dimensions. Magnetic waves conducted through
living tissue undergo less distortion than electrical signals do, so an
MEG can yield a higher resolution than an ERP. A major advantage of
the MEG over the EEG and ERP, then, is MEG’s ability to more
precisely identify the source of the activity being recorded. For example,
MEG has proved useful in locating the source of epileptic discharges.
MEG’s disadvantage is its high cost in comparison with the apparatus
used to produce EEGs and ERPs.
7-2 REVIEW
Measuring the Brain’s Electrical Activity
Before you continue, check your understanding.
1 . The four major techniques for tracking the brain’s electrical activity
are ___________, ___________, ___________, and ___________.
2 . Single-cell recording measures ___________ from a single neuron.
3 . EEG measures ___________ on the cell membrane.
4 . Magnetoencephalography measures the ___________ and also
provides a ___________.
5 . What is the advantage of EEG techniques over MEG?
Answers appear at the back of the book.
7-3 Anatomical Imaging Techniques: CT and MRI
Until the early 1970s, the only way to actually image the living brain was
by using X-rays that produce static images of brain anatomy from one
angle. The modern era of brain imaging began in the early 1970s, when
Allan Cormack and Godfrey Hounsfield independently developed an X-
ray approach now called computed tomography : the CT scan.
Cormack and Hounsfield both recognized that a narrow X-ray beam
could be passed through the same object at many angles, creating many
images; the images could be combined with the use of computing and
mathematical techniques to produce a three-dimensional image of the
brain.
computed tomography (CT) X-ray technique that produces a static
three-dimensional image of the brain in cross section—a CT scan.
Tomo- comes from the Greek word for section, indicating that
tomography yields a picture through a single brain slice.
The CT method resembles the way in which our two eyes (and our
brain) work in concert to perceive depth and distance to locate an object
in space. The CT scan, however, coordinates many more than two
images, roughly analogous to our walking to several vantage points to
obtain multiple views. X-ray absorption varies with tissue density. High-
density tissue, such as bone, absorbs a lot of radiation. Low-density
material, such as ventricular fluid or blood, absorbs little. Neural tissue
absorption lies between these extremes. CT scanning software translates
these differences in absorption into a brain image in which dark colors
indicate low-density regions and light colors indicate high-density
regions.
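The final display step described here amounts to rescaling absorption values onto a grayscale. The toy sketch below (not real reconstruction software) maps a few made-up absorption values so that low-density tissue appears dark and high-density tissue appears light.

```python
import numpy as np

# Toy "slice": relative X-ray absorption for a few tissue types (arbitrary units).
# Values are illustrative only: fluid absorbs least, bone absorbs most.
absorption = np.array([
    [0.10, 0.10, 0.95],   # ventricular fluid, fluid, skull bone
    [0.45, 0.50, 0.95],   # neural tissue, neural tissue, skull bone
    [0.40, 0.48, 0.95],
])

def to_grayscale(a):
    """Scale absorption to 0-255: dark = low density, light = high density."""
    scaled = (a - a.min()) / (a.max() - a.min())
    return (scaled * 255).astype(np.uint8)

print(to_grayscale(absorption))
```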
Neil Borden/Science Source
FIGURE 7-13 CT Scan and Brain Reconstruction (A) Dorsal
view of a horizontal CT scan of a subject with Broca’s aphasia. The dark
region at the left anterior is the area of the lesion. (B) A schematic
representation of the horizontal section, with the area of the lesion shown
in blue. (C) A reconstruction of the brain, showing a lateral view of the left
hemisphere with the lesion shown in blue. Research from Damasio, H., & Damasio,
A. R. (1989). Lesion analysis in neuropsychology (p. 56). New York: Oxford University Press.
Optical Tomography
Research Focus 7-1 , Tuning into Language, describes a brain-imaging
study that used functional near-infrared spectroscopy (fNIRS) to
investigate newborn infants’ responses to language. fNIRS is a form of
optical tomography, a functional imaging technique that operates on the
principle that an object can be reconstructed by gathering light
transmitted through it. One requirement is that the object at least partially
transmit light. Thus, optical tomography can image soft body tissue, such
as that in the breast or the brain.
In fNIRS, reflected infrared light is used to determine blood flow
because oxygen-rich hemoglobin and oxygen-poor hemoglobin differ in
their absorption spectra. By measuring the blood’s light absorption it is
possible to measure the brain’s average oxygen consumption. So fNIRS
and fMRI measure essentially the same thing but with different tools. In
fNIRS, an array of optical transmitter and receiver pairs are fitted across
the scalp, as illustrated in Figure 7-18 A.
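The conversion from measured light absorption to hemoglobin change is conventionally done with the modified Beer–Lambert law, sketched below for the two-wavelength case. The extinction coefficients and path-length value are placeholders chosen only to illustrate the algebra, not calibrated constants for any real fNIRS instrument.

```python
import numpy as np

# Changes in optical density at two wavelengths are converted into changes in
# oxygenated (HbO) and deoxygenated (HbR) hemoglobin concentration.
# Extinction coefficients [HbO, HbR] per wavelength (rows); placeholder units.
E = np.array([[0.6, 3.8],    # ~760 nm: deoxygenated hemoglobin absorbs more
              [2.5, 1.8]])   # ~850 nm: oxygenated hemoglobin absorbs more

path_length = 3.0            # effective optical path length (placeholder value)

def hemoglobin_changes(delta_od_760, delta_od_850):
    """Solve delta_OD = path_length * E @ delta_conc for the concentration changes."""
    delta_od = np.array([delta_od_760, delta_od_850])
    delta_conc = np.linalg.solve(E * path_length, delta_od)
    return {"delta_HbO": delta_conc[0], "delta_HbR": delta_conc[1]}

# Example: a task-related increase in absorption at 850 nm, little change at 760 nm
print(hemoglobin_changes(delta_od_760=0.001, delta_od_850=0.004))
```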
The obvious advantage of fNIRS is that it is relatively easy to hook
subjects up repeatedly and record from them for short periods, from
infancy to senescence. The disadvantage is that the light does not
penetrate far into the brain, so researchers are restricted to measuring
cortical activity (Figure 7-18 B). The spatial resolution is also not as
good as with other noninvasive methods, although NIRS equipment now
uses over 100 light detectors on the scalp, which allows acceptable
spatial resolution in the image. NIRS has been used to differentiate
cancerous from noncancerous brain tissue. This advance should lead to
safe, extensive surgical removal of brain cancers and improved outcomes
(Kut et al., 2015).
FIGURE 7-20 Resting State PET images of blood flow obtained while a single
subject rested quietly with eyes closed. Each scan represents a horizontal
brain section, from the dorsal surface (1) to the ventral surface (31). The
development of resting-state fMRI (rs-fMRI) suggests that resting-state PET
analysis may emerge as a comparable approach.
M. E. Raichle, Mallinckrodt Institute of Radiology, Washington University
School of Medicine
But PET researchers who are studying the link between blood flow
and mental activity use a subtraction procedure. They subtract the blood
flow pattern when the brain is in a carefully selected control state from
the pattern of blood flow imaged when the subject is engaged in the task
under study, as illustrated in the top row of Figure 7-21 . This
subtraction process images the change in blood flow between the two
states. The change can be averaged across subjects (middle row) to yield
a representative average image difference that reveals which brain areas
are selectively active during the task (bottom). PET does not measure
local neural activity directly; rather, it infers activity on the assumption
that blood flow increases where neuron activity increases.
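The arithmetic behind this subtraction procedure is simple image algebra. The sketch below uses invented numbers: each subject's control-state image is subtracted from that subject's task-state image, and the difference images are then averaged across subjects to highlight regions that are reliably more active during the task.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented blood-flow images (8 x 8 voxels) for 5 subjects in two states.
n_subjects, shape = 5, (8, 8)
control = rng.normal(50, 5, size=(n_subjects, *shape))   # resting control state
task = control + rng.normal(0, 2, size=(n_subjects, *shape))

# Pretend one region (a 2 x 2 block of voxels) is more active during the task.
task[:, 2:4, 5:7] += 10

# Subtraction: task minus control, computed for each subject separately.
difference = task - control

# Averaging the difference images across subjects yields the mean change image.
mean_difference = difference.mean(axis=0)

# Voxels with a large average increase are candidate task-related regions.
active = np.argwhere(mean_difference > 5)
print("Voxels flagged as task-related:", active.tolist())
```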
A significant limitation of PET is that radiochemicals, including so-
called radiopharmaceuticals used in diagnosing human patients, must be
prepared in a cyclotron quite close to the scanner because their half-lives
are so short that transportation time is a severely limiting factor.
Generating these materials is very expensive. But in spite of the expense,
PET has important advantages over other imaging methods:
• PET can detect the decay of literally hundreds of radiochemicals,
which allows the mapping of a wide range of brain changes and
conditions, including changes in pH, glucose, oxygen, amino acids,
neurotransmitters, and proteins.
• PET can detect relative amounts of a given neurotransmitter, the
density of neurotransmitter receptors, and metabolic activities
associated with learning, brain poisoning, and degenerative processes
that might be related to aging.
• PET is widely used, with great success, to study cognitive function. PET
findings confirm, for example, that various brain regions perform differing
functions.
• Hybrid scanners for diagnostic imaging are now available in several
combinations: PET with CT, PET with MRI, and PET with MRI and EEG. The
advantage of these hybrid scanners is that they can acquire high-quality
anatomical images and then overlay the functional and metabolic information,
allowing precise localization that was not available before, all within a
single examination.
7-4 REVIEW
Functional Brain Imaging
Before you continue, check your understanding.
1 . The principal methods of functional brain imaging are ___________,
___________, and ___________.
2 . PET uses ___________ to measure brain processes and to identify
___________ changes in the brain.
3 . fMRI and optical imaging measure changes in ___________.
4 . Why are resting-state measurements useful to researchers?
Answers appear at the back of the book.
7-6 Comparing Neuroscience Research Methods
We have considered a wide range of research methods for manipulating
and measuring brain–behavior interactions. Tables 7-1 and 7-2
summarize these methods, including goals and examples of each method.
How do researchers choose among them all? Their main consideration is
their research question. Ultimately, that question is behavioral, but many
steps lie along the route to understanding behavior.
Some researchers focus on morphology (structure) in postmortem
tissue. This approach allows detailed analysis of both macro and micro
structure, depending on the method chosen. Identifying brain pathology,
as in Parkinson disease, can lead to insights about the causes and nature
of a disorder.
Other investigators focus more on the ways neurons generate electrical
activity in relation to behavior or on functional changes in brain activity
during specific types of cognitive processing. Both approaches are
legitimate: the goal is gaining an understanding of brain–behavior
relationships.
But investigators must consider practical issues, too. Temporal
resolution (how quickly the measurement or image is obtained); spatial
resolution (how accurate localization is in the brain); and the degree of
invasiveness all are pertinent. It is impractical to consider MRI-based
methods for studies of very young children, for example, because
although the images are highly accurate, the participants must remain
absolutely still for long periods.
Similarly, studies of brain-injured patients must take into account
factors such as the subject’s ability to maintain attention for long periods
—during neuropsychological testing or imaging studies, for example.
And practical problems such as motor or language impairment may limit
the types of methods that researchers can use.
Of course, cost is an ever-present practical consideration. Studying
brain and behavior linkages by perturbing the brain is generally less
costly than using imaging methods, many of which require expensive
machinery. EEG, ERP, and fNIRS are noninvasive and relatively
inexpensive to set up (less than $100,000). MRI-based methods, MEG,
and PET are very expensive (more than $2 million) and therefore
typically found only in large research centers or hospitals. Similarly,
epigenetic studies can be very expensive if investigators consider the
entire genome in a large number of biological samples.
TABLE 7-1 Manipulating Brain and Behavior

Method: Record electrical and magnetic activity (Section 7-2)
Goal: Measure action potentials from individual neurons; measure graded
potentials to assess the coordinated activity of thousands of neurons;
measure magnetic fields
Examples: Single-cell recording; EEG, ERP; MEG

Method: Functional brain imaging (Section 7-4)
Goal: Measure brain activity as specific behaviors are performed
Examples: fMRI; fNIRS; MRS; PET

Method: Genetics (Section 7-5)
Goal: Determine the presence of a gene and its products
Examples: DNA, RNA, protein analysis
7-7 Using Animals in Brain–Behavior Research
A complete understanding of brain–behavior relationships is limited in
part by the voluntary ethical constraints investigators place upon
experimentation on humans and nonhuman species. Most individual
countries decide independently which experimental practices are
acceptable for humans, for other vertebrates, and for invertebrate species.
In general, the experimental methods acceptable for use on our species
are fewer than those employed on our most closely related primate
relatives. Thus, as with most new treatments in medicine, a wide variety of
nonhuman species has been used to develop and test treatments for human
neurological and psychiatric disorders before those treatments are tested on
humans.
Although the human and the nonhuman brain have obvious
differences with respect to language, the general brain organization
across mammalian species is remarkably similar, and the functioning of
basic neural circuits in nonhuman mammals appears to generalize to
humans. Thus, neuroscientists use widely varying animal species to
model human brain diseases as well as to infer typical human brain
functioning.
Two important issues surface in use of animal models to develop
treatments for brain and behavioral disorders. The first is whether
animals actually display neurological diseases in ways similar to
humans. The second surrounds the ethics of using animals in research.
We consider each separately.
Attention-Deficit/Hyperactivity Disorder
Together, attention-deficit/hyperactivity disorder (ADHD) and
attention-deficit disorder (ADD) are probably the most common
disorders of brain and behavior in children, with an incidence of 4
percent to 10 percent of school-aged children. Although it often goes
unrecognized, an estimated 50 percent of children with ADHD still
show symptoms in adulthood, where its behaviors are associated
with family breakups, substance abuse, and driving accidents.
The neurobiological basis of ADHD and ADD is generally
believed to be a dysfunction in the noradrenergic or dopaminergic
activating system, especially in the frontal basal ganglia circuitry.
Psychomotor stimulants such as methylphenidate (Ritalin) and
Adderall (mainly dextroamphetamine) act to increase brain levels of
noradrenaline and dopamine and are widely used for treating ADHD.
About 70 percent of children show improvement of attention and
hyperactivity symptoms with treatment, but there is little evidence
that drugs directly improve academic achievement. This is important
because about 40 percent of children with ADHD fail to get a high-
school diploma, even though many receive special education for
their condition.
After adjusting for age, sex, race, and parental education, Noble and
colleagues found that family income was associated with cortical surface
area. The brain regions shown in red had significantly smaller surface area
in children from low-SES families.
8-1 Three Perspectives on Brain Development
Brain and behavior develop apace, and scientists thus reason that the two
are closely linked. Events that alter behavioral development should
similarly alter the brain’s structural development and vice versa. As the
brain develops, neurons become more and more intricately connected,
and these increasingly complex interconnections underlie increasingly
complex behaviors. These observations enable neuroscientists to study
the relation between brain and behavioral development from three
perspectives:
1. Structural development can be correlated with emerging behaviors.
2. Behavioral development can be predicted by the underlying circuitry
that must be emerging.
3. Research can focus on factors such as language, injury, or
socioeconomic status (SES) that influence both brain structure and
behavioral development.
FIGURE 8-4 From Fertilization to Embryo Development begins
at fertilization (day 1), with the formation of the zygote. On day 2, the
zygote begins to divide. On day 15, the raised embryonic disc begins to
form. Information from K. L. Moore (1998). The developing human: Clinically oriented
embryology (4th ed., p. 61). Philadelphia: Saunders.
Embryo (2 to 8 weeks) and fetus (9 weeks to birth). Prof. P. M. Motta/Science
Source
The cells that form the neural tube can be regarded as the nursery for
the rest of the central nervous system. The open region in the tube’s
center remains open and matures into the brain’s ventricles and the spinal
canal. The micrographs in Figure 8-6 show the neural tube closing in a
mouse embryo.
The human body and nervous system change rapidly in the next 3
weeks (Figure 8-7 ). By 7 weeks (49 days), the embryo begins to
resemble a miniature person. The brain looks distinctly human by about
100 days after conception, but it does not begin to form gyri and sulci
until about 7 months. By the end of the ninth month, the fetal brain has
the gross appearance of the adult human brain, but its cellular structure is
different.
In Figure 8-7, the forebrain, midbrain, hindbrain, and neural tube (which
forms the spinal cord) are labeled; development is shown through 36 weeks.
FIGURE 8-7 Prenatal Brain Development The developing
human brain undergoes a series of embryonic and fetal stages. You can
identify the forebrain, midbrain, and hindbrain by color (review Figure 8-3 )
as they develop in the course of gestation. At 6 months, the developing
forebrain has enveloped the midbrain structures. Research from Cowan, W. M.
(1979). The development of the brain. Scientific American, 241(3), p. 116.
FIGURE 8-8 Origin of Brain Cells Cells in the brain begin as
multipotential stem cells, develop into precursor cells, then produce blasts
that finally develop into specialized neurons or glia.
How does a stem cell know to become a neuron rather than a skin cell?
In each cell, certain genes are expressed (turned on) by a signal, and those
genes then produce a particular cell type. Gene expression means that a
formerly dormant gene is activated so that the cell makes a specific
protein. You can easily imagine that certain proteins produce skin cells,
whereas other proteins produce neurons.
The specific signals for gene expression are largely unknown but
probably are chemical, and they form the basis of epigenetics. A common
epigenetic mechanism that suppresses gene expression during
development is gene methylation, or DNA methylation. Here a methyl
group (CH3 ) attaches to the nucleotide base cytosine lying next to
guanine on the DNA sequence. It is relatively simple to quantify gene
methylation in different phenotypes: more methylation reflects a decrease,
and less methylation an increase, in overall gene expression.
Methylation alters gene expression dramatically during development.
Prenatal stress can reduce gene methylation by 10 percent. This means
that prenatally stressed infants express 2000 more genes (of the more than
20,000 in the human genome) than unstressed infants (Mychasiuk et al.,
2011). Other epigenetic mechanisms, such as histone modification and
mRNA modification, can regulate gene expression, but these mechanisms
are more difficult to quantify.
Thus, the chemical environment of a brain cell is different from that of
a skin cell: different genes in these cells are activated, producing different
proteins and different cell types. The chemical environments needed to
trigger cellular differentiation could be produced by the activity of
neighboring cells or by chemicals, such as hormones, that are transported
in the bloodstream.
Cerebral Cortex Brain weight and body weight increase rapidly and
in proportion. The cortex begins to form about 6 weeks after conception,
with neurogenesis largely complete by 25 weeks. Neural migration and cell
differentiation begin at about 8 weeks and are largely complete by about 29
weeks. Neuron maturation, including axon and dendrite growth, begins at
about 20 weeks and continues until well after birth. Information from M. Marin-
Padilla (1993). Pathogenesis of late-acquired leptomeningeal heterotopias and secondary
cortical alterations: A Golgi study. In A. M. Galaburda (Ed.), Dyslexia and development:
Neurobiological aspects of extraordinary brains.
Cell migration begins shortly after the first neurons are generated and
continues for about 6 weeks in the cerebral cortex (and throughout life in
the hippocampus). Cell differentiation, in which neuroblasts become
specific types of neurons, follows migration. Cell differentiation is
essentially complete at birth, although neuron maturation, which includes
the growth of dendrites, axons, and synapses, goes on for years and in
some parts of the brain may continue throughout adulthood.
The hippocampus (see Figure 2-25 ) is critical to memory (Section
14-3 ) and vulnerable to stress (Section 6-5 ).
The cortex is organized into layers distinctly different from one
another in their cellular makeup. How does this arrangement of
differentiated areas develop? Neuroscientist Pasko Rakic and his
colleagues (e.g., Geschwind & Rakic, 2013) have been finding answers to
this question for more than four decades. Apparently, the subventricular
zone contains a primitive cortical map that predisposes cells formed in a
certain ventricular region to migrate to a certain cortical location. One
subventricular region may produce cells destined to migrate to the visual
cortex; another might produce cells destined to migrate to the frontal
lobes, for example.
How do the migrating cells know where to find these different parts of
the cortex? They follow a path made by radial glial cells. A glial fiber
from each of these path-making cells extends from the subventricular
zone to the cortical surface, as illustrated in Figure 8-10 A . The close-up
views in Figure 8-10 B and C show that neural cells from a given
subventricular region need only follow the glial road to end up in the
correct location.
radial glial cell Path-making cell that a migrating neuron follows to
its appropriate destination.
As the brain grows, the glial fibers stretch but still go to the same
place. Figure 8-10 B also shows a cell moving across the radial glial
fibers. Although most cortical neurons follow the radial glial fibers, a
small number appear to migrate by seeking some type of chemical signal.
Researchers do not yet know why these cells function differently.
Cortical layers develop from the inside out, much like adding layers to
a tennis ball. The neurons of innermost layer VI migrate to their locations
first, followed by those destined for layer V and so on, as successive
waves of neurons pass earlier-arriving neurons to assume progressively
more exterior positions in the cortex. Cortex formation is a bit like
building a house from the ground up until you reach the roof. The
materials needed to build higher floors must pass through lower floors to
get to their destinations.
Figure 2-22 contrasts the sensory and motor cortices’ six distinct
layers and their functions.
To facilitate house construction, each new story has a blueprint-
specified dimension, such as 10 feet high. How do neurons determine
how thick a cortical layer should be? This is a tough question, especially
when you consider that the cortical layers are not all the same thickness.
Local environmental signals—chemicals produced by other cells—
likely influence the way cells form layers in the cortex. These
intercellular signals progressively restrict the choice of traits a cell can
express, as illustrated in Figure 8-11 . Thus, the emergence of distinct
cell types in the brain results not from the unfolding of a specific genetic
program but rather from the interaction of genetic instructions, timing,
and signals from other cells in the local environment.
Figure 8-11 traces this progression: an uncommitted precursor gives rise to
cells with some segregation of determinants, then to further segregation of
determinants under the influence of the intercellular environment, and
finally to diverse cell types.
Neuronal Maturation
After neurons migrate to their destination and differentiate, they begin to
mature by (1) growing dendrites to provide surface area for synapses with
other cells and (2) extending their axons to appropriate targets to initiate
synapse formation.
Two events take place in dendrite development: dendritic arborization
(branching) and the growth of dendritic spines. As illustrated in Figure 8-
12 , dendrites in newborn babies begin as individual processes protruding
from the cell body. In the first 2 years of life, dendrites develop
increasingly complex extensions that look much like leafless tree
branches: they undergo arborization. The dendritic branches then begin to
form spines, where most synapses on dendrites are located.
Although dendritic development begins prenatally in humans, it
continues for a long time after birth, as Figure 8-12 shows. Dendritic
growth proceeds at a slow rate, on the order of microns (µm, millionths of
a meter) per day. Contrast this with the development of axons, which
grow on the order of a millimeter per day, about a thousand times faster.
FIGURE 8-12 Neuronal Maturation in Cortical Language
Areas In postnatal cortical differentiation—shown here around Broca’s
area, which controls speaking—neurons begin with simple dendritic fields
that become progressively more complex until a child reaches about 2
years old. Brain maturation thus parallels a behavioral development: the
emergence of language. Figure from Biological Foundations of Language (pp. 160–161),
by E. Lenneberg, 1967, New York: Wiley.
FIGURE 8-13 Seeking a Path (A) At the tip of this axon, nurtured in a
culture, a growth cone sends out filopodia seeking specific molecules to
guide the axon’s direction of growth. (B) Filopodia guide the growth cone
toward a target cell that is releasing cell adhesion or tropic molecules,
represented in the drawing by red dots.
An axon’s appropriate targets may be millimeters or even a meter away in the
developing brain, and the axon must find its way through complex cellular
terrain to reach them. Making these connections presents a significant
engineering problem for the developing brain. Such a task
could not possibly be specified in a rigid genetic program. Rather,
genetic–environmental interaction is at work again, as various molecules
that attract or repel the approaching axon tip guide the formation of
axonic connections.
Santiago Ramón y Cajal was the first scientist to describe this
developmental process, more than a century ago. He called the growing tips of
axons growth cones. Figure 8-13 A shows that as growth cones extend,
they send out shoots, analogous to fingers reaching out to find a pen on a
cluttered desk. When one shoot, a filopod (pl. filopodia ), reaches an
appropriate target, the others follow.
growth cone Growing tip of an axon.
filopod (pl. filopodia) Process at the end of a developing axon that
reaches out to search for a potential target or to sample the
intercellular environment.
Growth cones are responsive to cues from two types of molecules
(Figure 8-13 B):
1. Cell adhesion molecules (CAMs) are cell-manufactured molecules
that either lie on the target cell’s surface or are secreted into the
intercellular space. Some provide a surface to which growth cones can
adhere, hence the name cellular adhesion molecule; others serve to
attract or repel growth cones.
cell adhesion molecule (CAM) A chemical molecule to which
specific cells can adhere, thus aiding in migration.
2. Tropic molecules, produced by the targets the axons’ growth cones
are seeking (- tropic means moving toward ; pronounced as trope, not
tropical ), essentially tell growth cones to come on over here. They
likely also tell other growth cones seeking different targets to keep
away.
tropic molecule Signaling molecule that attracts or repels growth
cones.
Do not confuse tropic (guiding) molecules with the trophic
(nourishing) molecules, discussed earlier, which support neuronal
growth.
Although Ramón y Cajal predicted them more than 100 years ago,
tropic molecules have proved difficult to find. So far, only one group,
netrins (from Sanskrit for to guide ), has been identified in the brain.
Given the enormous number of brain connections and the great
complexity in wiring them, many other types of tropic molecules
undoubtedly will be found.
netrin Member of the only class of tropic molecules yet isolated.
Synaptic Development
The number of synapses in the human cerebral cortex is staggering, on
the order of 10^14, or 100 trillion. A genetic program that assigns
each synapse a specific location could not possibly determine each spot
for this huge number. As with all stages of brain development, only the
general outlines of neuronal connections in the brain are likely to be
genetically predetermined. The vast array of specific synaptic contacts is
then guided into place by a variety of local environmental cues and
signals.
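A rough back-of-envelope calculation makes the 10^14 figure plausible. The sketch below multiplies two illustrative round numbers, an approximate cortical neuron count and an approximate number of synapses per neuron; neither value comes from this chapter.

```python
# Back-of-envelope estimate; the inputs are illustrative round numbers.
cortical_neurons = 1.6e10        # roughly 16 billion neurons in the human cortex
synapses_per_neuron = 7_000      # several thousand synapses per neuron

total_synapses = cortical_neurons * synapses_per_neuron
print(f"Estimated cortical synapses: {total_synapses:.1e}")   # about 1.1e14
```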
A human fetus displays simple synaptic contacts in the fifth
gestational month. By the seventh gestational month, synaptic
development on the deepest cortical neurons is extensive. After birth,
synapse numbers increase rapidly. In the visual cortex, synaptic density
almost doubles between ages 2 months and 4 months and then continues
to increase until age 1 year.
Cell Death and Synaptic Pruning
To carve statues, sculptors begin with blocks of stone and chisel away
the unwanted pieces. The brain does something similar during cell death
and synaptic pruning. The chisel in the brain could be a genetic signal,
experience, reproductive hormones, stress, even SES. The effect of these
chisels can be seen in changes in cortical thickness over time, as
illustrated in Figure 8-14 , an atlas of brain images. The cortex actually
becomes measurably thinner in a caudal–rostral (back-to-front) gradient,
a process that is probably due both to synaptic pruning and to white
matter expansion. This expansion stretches the cortex, leading to
increased surface area, as illustrated in Research Focus 8-1 .
The graph in Figure 8-15 plots this rise and fall in synaptic density.
Pasko Rakic (1974) estimated that at the peak of synapse loss, a person
may lose as many as 100,000 synapses per second. Synapse elimination is
extensive. Peter Huttenlocher (1994) estimated it at 42 percent in the
human cortex. We can only wonder what the behavioral consequence of
this rapid synaptic loss might be. It is probably no coincidence that
children, especially toddlers and adolescents, seem to change moods and
behaviors quickly.
How does the brain eliminate excess neurons? The simplest
explanation is competition, sometimes referred to as neural Darwinism.
Charles Darwin believed that one key to evolution is the variation it
produces in the traits possessed by a species. Those whose traits are best
suited to the local environment are most likely to survive. From a
Darwinian perspective, then, more animals are born than can survive to
adulthood, and environmental pressures weed out the less-fit ones.
Similar pressures cause neural Darwinism.
neural Darwinism Hypothesis that the processes of cell death and
synaptic pruning are, like natural selection in species, the outcome
of competition among neurons for connections and metabolic
resources in a neural environment.
What exactly causes this cellular weeding out in the brain? It turns out
that when neurons form synapses, they become somewhat dependent on
their targets for survival. In fact, deprived of synaptic targets, neurons
eventually die. They die because target cells produce neurotrophic
(nourishing) factors absorbed by the axon terminals that function to
regulate neuronal survival. Nerve growth factor (NGF), for example, is
made by cortical cells and absorbed by cholinergic neurons in the basal
forebrain.
If many neurons compete for a limited amount of a neurotrophic
factor, only some can survive. The death of neurons deprived of a
neurotrophic factor is different from the cell death caused by injury or
disease. When neurons are deprived of a neurotrophic factor, certain
genes seem to be expressed, resulting in a message for the cell to die.
This programmed process is called apoptosis.
apoptosis Genetically programmed cell death.
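The competition logic can be made concrete with a toy simulation, not a model drawn from the text: a fixed pool of neurotrophic factor is divided among neurons in proportion to how strongly each has connected to its target, and neurons whose share falls below a survival threshold undergo apoptosis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model: 100 neurons compete for a limited pool of neurotrophic factor.
n_neurons = 100
trophic_pool = 60.0           # arbitrary units, deliberately scarce
survival_threshold = 0.8      # minimum share a neuron needs to survive

# Each neuron's connection strength to the target determines its share of the pool.
connection_strength = rng.random(n_neurons)
share = trophic_pool * connection_strength / connection_strength.sum()

survivors = share >= survival_threshold
print(f"{survivors.sum()} of {n_neurons} neurons survive; "
      f"{(~survivors).sum()} undergo apoptosis")
```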
Apoptosis accounts for the death of overabundant neurons, but it does
not account for the synaptic pruning from cells that survive. In 1976,
French neurobiologist Jean-Pierre Changeux proposed a theory for
synapse loss that also is based on competition (Changeux & Danchin,
1976). According to Changeux, synapses persist into adulthood only if
they have become members of functional neural networks. If not, they
are eventually eliminated from the brain. We can speculate that
environmental factors such as hormones, drugs, and experience would
influence active neural circuit formation and thus influence synapse
stabilization and pruning.
In addition to outright errors in synapse formation that give rise to
synaptic pruning, subtler changes in neural circuits may trigger the same
process. One such change accounts for the findings of Janet Werker and
Richard Tees (1992), who studied the ability of infants to discriminate
speech sounds taken from widely disparate languages, such as English,
Hindi (from India), and Salish (a Native American language). Their
results show that young infants can discriminate speech sounds of
different languages without previous experience, but their ability to do so
declines in the first year of life. An explanation for this declining ability
is that synapses encoding speech sounds not typically encountered in an
infant’s daily environment are not active simultaneously with other
speech-related synapses. As a result, they are eliminated.
Synaptic pruning may also allow the brain to adapt more flexibly to
environmental demands. Human culture is probably the most diverse and
complex environment with which any animal must cope. Perhaps the
flexibility in cortical organization achieved by the mechanism of
selective synaptic pruning is a necessary precondition for successful
development in a cultural environment.
Synaptic pruning may also be a precursor related to different
perceptions that people develop about the world. Consider, for example,
the obvious differences in Eastern and Western philosophies about life,
religion, and culture. Given the cultural differences to which people in
the East and West are exposed as their brain develops, imagine how
different their individual perceptions and cognitions may be. Considered
together as a species, however, we humans are far more alike than we are
different.
An important and unique characteristic common to all humans is
language. As illustrated in Figure 8-14 , the cortex generally thins from
age 5 to 20. The sole exception: major language regions of the cortex
actually show an increase in gray matter. Figure 8-16 contrasts the
thinning of other cortical regions with the thickening of language-related
regions (O’Hare & Sowell, 2008). A different pattern of development for
brain regions critical in language processing makes sense, given
language’s unique role in cognition and the long learning time.
Unique Aspects of Frontal Lobe Development
The imaging atlas in Figure 8-14 confirms that the frontal lobe is the last
brain region to mature. Since the atlas was compiled, neuroscientists
have confirmed that frontal lobe maturation extends far beyond its age
20 boundary, including in the dorsolateral prefrontal cortex. The
DLPFC, which comprises Brodmann areas 9 and 46, makes reciprocal
connections with posterior parietal cortex and the superior temporal
sulcus: it selects behavior and movement with respect to temporal
memory. Zdravko Petanjek and colleagues (2011) analyzed synaptic
spine density in the DLPFC in a large sample of human brains ranging in
age at death from newborn to age 91 years.
dorsolateral prefrontal cortex (DLPFC) Brodmann areas 9 and
46; makes reciprocal connections with posterior parietal cortex and
the superior temporal sulcus; responsible for selecting behavior and
movement with respect to temporal memory.
Three-dimensional atlases guide researchers to brain regions’
precise locations (Section 7-1 ). More in Sections 12-4 and 15-3 on
how the DLPFC functions.
The analysis confirms that dendritic spine density, a good measure of
the number of excitatory synapses, is two to three times greater in
children than in adults and that spine density begins to decrease during
puberty. The analysis also shows that dendritic spines continue to be
eliminated well beyond age 20, stabilizing at the adult level around age
30. Two important correlates attend slow frontal lobe development:
1. The frontal lobe is especially sensitive to epigenetic influences (Kolb
et al., 2012). In a study of more than 17,000 people, Robert Anda and
colleagues (Anda et al., 2006) showed that such adverse childhood
experiences (ACEs) as verbal or physical abuse, a family member’s
addiction, or loss of a parent are predictive of physical and mental
health in middle age. People with two or more ACEs, for example, are
50 times more likely to acquire addictions or attempt suicide. Women
with two or more ACEs are 5 times more likely to have been sexually
assaulted by age 50. We hypothesize that early aversive experiences,
such as sexual assault, promote these ACE-related susceptibilities by
compromising frontal lobe development. Abnormal frontal lobe
development would make a person less likely to judge such a situation
as dangerous.
You can view and answer the ACE questionnaire at
http://www.theannainstitute.org/Finding Your ACE Score.pdf
2. The trajectory of frontal lobe development correlates with adult
intelligence. Philip Shaw and his colleagues (2006) used a longitudinal
design, administering multiple structural MRIs to participants over
time. The results show that it is not the thickness of the frontal cortex
in adulthood that predicts IQ score but rather the change in trajectory
of cortical thickness ( Figure 8-17 ). Children who score highest in
intelligence show the greatest plastic changes in the frontal lobe over
time. These changes are likely to reflect strong epigenetic influences.
Glial Development
Astrocytes and oligodendrocytes begin to develop after most
neurogenesis is complete and continue to develop throughout life. CNS
axons can function before they are myelinated by oligodendroglia, but
healthy adult function is attained only after myelination is complete.
Consequently, myelination is a useful rough index of cerebral
maturation.
Astrocytes nourish and support neurons; oligodendroglia form
myelin in the CNS (see Table 3-1 ).
In the early 1920s, Paul Flechsig noticed that cortical myelination
begins just after birth and continues until at least 18 years of age. He also
noticed that some cortical regions were myelinated by age 3 to 4 years,
whereas others showed virtually no myelination at that time. Figure 8-18
shows one of Flechsig’s cortical maps with areas shaded according to
earlier or later myelination.
Flechsig hypothesized that the earliest-myelinating areas control
simple movements or sensory analyses, whereas the latest-myelinating
areas control the highest mental functions. MRI analyses of myelin
development in the cortex show that white matter thickness largely does
correspond to the progress of myelination, confirming Flechsig’s ideas.
Myelination continues until at least 20 years of age, as illustrated in
Figure 8-19 , which contrasts total brain volume, gray matter volume,
and white matter volume during brain development in females and
males.
FIGURE 8-17 Frontal Lobe Development and IQ Score
The trajectory of frontal lobe development from ages 7 to 16 years
correlates with dynamic changes in cortical thickness. Colors on the scans
scale to the magnitude of differences between individuals with average
and superior intelligence. Purple shows thinner cortex in the individuals
with higher IQ scores; red, yellow, and green show progressively
increasing cortical thickness in those individuals. At age 7, they have a
thinner frontal cortex that rapidly thickens to peak at age 13, then wanes
later in adolescence. P. Shaw, D. Greenstein, J. Lerch, L. Clasen, R. Lenroot, et al. Intellectual ability and cortical development in children and adolescents. Nature, 440, pp. 676–679, March 30, 2006. Permission conveyed through Copyright Clearance Center, Inc.
FIGURE 8-18 Progress of Myelination The fact that the light-
colored zones are very late to myelinate led Flechsig to propose that they
are qualitatively different in function from those that mature earlier.
FIGURE 8-19 Sex Differences in Brain Development Mean brain volume
by age in years for males (green) and females (orange). Arrows above the curves
indicate that females show more rapid growth than males, reaching maximum overall
volume (A) and gray matter volume (B) sooner. Decreasing gray matter corresponds to
cell and synaptic loss. Increasing white matter volume (C) largely corresponds to myelin
development. Information from R. K. Lenroot, N. Gogtay, D. K. Greenstein, E. M. Wells, G. L. Wallace, L. S.
Clasen, et al. (2007). Sexual dimorphism of brain development trajectories during childhood and adolescence.
NeuroImage, 36, 1065–1073.
8-2 REVIEW
Neurobiology of Development
Before you continue, check your understanding.
1 . The central nervous system begins as a sheet of cells, which folds
inward to form the ___________.
2 . The growth of neurons is referred to as ___________, whereas
formation of glial cells is known as ___________.
3 . Growth cones are responsive to two types of cues: ___________ and
___________.
4 . The adolescent period is characterized by two ongoing processes of
brain maturation: ___________ and ___________.
5 . What is the functional significance of the prolonged development of
the frontal lobe?
Answers appear at the back of the book.
Neural Maturation
As brain areas mature, a person’s behaviors correspond to the functions of
the maturing areas. Stated differently, behaviors cannot emerge until the
requisite neural machinery has developed. When that machinery is in
place, however, related behaviors develop quickly through stages and are
shaped significantly by epigenetic factors.
Researchers have studied these interacting changes in the brain and
behavior, especially in regard to the emergence of motor skills, language,
and problem solving in children. We now explore development in these
three areas.
Motor Behaviors
Developing locomotion skills are easy to observe in human infants. At
first, babies cannot move about independently, but eventually, they roll
over, then crawl, then walk.
Other motor skills develop in less obvious but no less systematic ways.
Shortly after birth, infants are capable of flexing their arms in such a way
that they can scoop something toward their body, and they can direct a
hand, as toward a breast when suckling. Between 1 and 3 months of age,
babies also begin to make spontaneous hand and digit movements
consisting of almost all the skilled finger movements they will make as an
adult, a kind of motor babbling.
These movements at first are directed toward handling parts of their
body and their clothes (Wallace & Whishaw, 2003). Only then are
reaching movements directed toward objects in space. Tom Twitchell
(1965) studied and described how the ability to reach for objects and
grasp them progresses in stages, illustrated in Figure 8-20 .
Between 8 and 11 months, infants’ grasping becomes more
sophisticated as the pincer grasp, employing the index finger and the
thumb, develops. The pincer grasp is significant developmentally: it
allows babies to make the very precise finger movements needed to
manipulate small objects. What we see, then, is a sequence in the
development of grasping: first scooping, then grasping with all of the
fingers, then grasping with independent finger movements.
If increasingly well-coordinated grasping depends on the emergence of
certain neural machinery, anatomical changes in the brain should
accompany the emergence of these motor behaviors. Such changes do
take place, especially in the development of dendritic arborizations and in
fiber connections between neocortex and spinal cord. And a correlation
has been found between myelin formation and the ability to grasp
(Yakovlev & Lecours, 1967).
A classic symptom of motor cortex damage, detailed in Section 11-1
, is permanent loss of the pincer grasp.
In particular, a group of axons from motor cortex neurons myelinate at
about the same time that whole-hand reaching and grasping develop.
Another group of motor cortex neurons known to control finger
movements myelinates at about the time that the pincer grasp develops.
MRI studies of changes in cortical thickness show that increased motor
dexterity is associated with decreased cortical thickness in the hand
region of the left motor cortex of right-handers (Figure 8-21 A ).
We can now make a simple prediction. If specific motor cortex
neurons are essential for adultlike grasping movements to emerge,
removing those neurons should make an adult’s grasping ability similar to
a young infant’s, which is in fact what happens.
FIGURE 8-20 Development of the Grasping Response of Infants, shown at 2, 4, and 10 months. Information from T. E. Twitchell (1965). The automatic grasping response of infants. Neuropsychologia, 3, p. 251.
Language Development
The gradual series of developments that accompanies speech acquisition
has usually progressed significantly by age 3 or 4. According to Eric
Lenneberg (1967), children reach certain important speech milestones in
a fixed sequence and at constant chronological ages. Children start to
form a vocabulary by 12 months. This 5-to-10-word repertoire typically
doubles over the next 6 months. By 2 years, vocabulary ranges from 200 to 300 words, mostly names of everyday objects. In another year, vocabulary approaches 1000 words, and children begin to form simple sentences. At 6 years, children boast a vocabulary of about 2500 words
and can understand more than 20,000 words en route to an adult
vocabulary of more than 50,000 words.
Although language skills and motor skills generally develop in
parallel, the capacity for language depends on more than the ability to
make controlled movements of the mouth, lips, and tongue. Precise
movements of the muscles controlling these body parts develop well
before children can speak. Furthermore, even when children have
sufficient motor skill to articulate most words, their vocabulary does not
rocket ahead but rather progresses gradually.
A small proportion of children (about 1 percent) have typical
intelligence and motor skill development, yet their speech acquisition is
markedly delayed. Such children may not begin to speak in phrases until
after age 4, despite an apparently healthy environment and the absence of
any obvious neurological signs of brain damage. Because the timing of
speech onset appears universal in the remaining 99 percent of children
across all cultures, something different has likely taken place in the brain
maturation of a child with late language acquisition. Specifying what that difference is, however, proves difficult.
FIGURE 8-21 Correlations Between Gray Matter (panels A–C)
Brain Development and the Environment
Developing behaviors are shaped not only by the maturation of brain
structures but also by each person’s environment and experience.
Neuroplasticity suggests that the brain is pliable and can be molded, at
least at the microscopic level. Brains exposed to different environmental
experiences are molded in different ways. Culture is an important aspect
of the human environment, so culture must help to mold the human
brain. We would therefore expect people raised in widely differing
cultures to acquire brain structure differences that have lifelong effects
on their behavior.
Section 1-5 summarizes humanity’s acquisition of culture.
The brain is plastic in response not only to external events but also to
events within a person’s body, including the effects of hormones, injury,
and genetic mutations. The developing brain early in life is especially
responsive to these internal factors, which in turn alter how the brain
responds to external experiences. In this section, we explore a whole
range of external and internal environmental influences on brain
development. We start with a question: Exactly how does experience
alter brain structure?
The idea that early experience can change later behavior seems
sensible enough, but we are left to question why experience should make
such a difference. One reason is that experience changes neuronal
structure, which is especially evident in the cortex. Neurons in the brains
of animals raised in complex environments, such as that shown in Figure
8-24 A , are larger and richer in synapses than are those of animals
reared in barren cages (Figure 8-24 B). Similarly, 3 weeks of tactile stimulation early in life increases synapse numbers throughout the cortex in adulthood.
Presumably, increased synapse numbers result from increased sensory
processing in a complex and stimulating environment. The brains of
animals raised in complex settings also display more (and larger)
astrocytes. Although complex-rearing studies do not address the effects
of human culture directly, making predictions about human development
on the basis of their findings is easy. We know that experience can
modify the brain, so we can predict that different experiences might
modify the brain differently. Take language development, for example,
as Research Focus 8-3 , Increased Cortical Activation for Second
Languages, on page 268 , explains.
Focus 5-5 describes some structural changes neurons undergo as a
result of learning.
Like early exposure to language during development, early exposure
to music alters the brain. Perfect (absolute) pitch, the ability to re-create a musical note without an external reference, is believed to require
musical training during an early period, when brain development is most
sensitive to this experience. Similarly, adults exposed only to Western
music since childhood usually find Eastern music peculiar, even
nonmusical, on first encountering it. Both examples demonstrate that
early exposure to music alters neurons in the auditory system (see
Levitin & Rogers, 2005).
Figure 15-11 shows enhanced nerve tract connectivity in people
with perfect pitch.
Such waning of plasticity after early sensitive periods does not mean that the adult human brain grows fixed and unchangeable. Adults’ brains are influenced by
exposure to new environments and experiences, although more slowly
and less extensively than children’s brains are. In fact, evidence reveals
that experience affects the brain well into old age: good news for those
of us who are no longer children.
It is becoming clear as well that prenatal events can modify brain
development. The consensus is that perinatal adversity, such as gestational stress or stress at or near birth, is a significant risk factor for later
behavioral disorders (see Bock et al., 2014). Even events such as stress
or drug use that occur before conception can lead to epigenetic effects in
offspring. Examples include abnormalities in neural organization and
behavior (e.g., Harker et al., 2015). Although such effects are usually
presumed to come from maternal exposure before conception, increasing
evidence from research on humans points to paternal preconception
experience also modifying children’s brain development, perhaps contributing to fetal alcohol spectrum disorder (FASD).
Focus 6-2 details FASD.
RESEARCH FOCUS 8-3
Romanian Orphans
In the 1970s, Romania’s Communist regime outlawed all forms of
birth control and abortion. The natural result was more than 100,000
unwanted children in state-run orphanages. The conditions were
appalling.
The children were housed and clothed but given virtually no
environmental stimulation. Mostly they were confined to cots with
few, if any, playthings and virtually no personal interaction with
overworked caregivers, who looked after 20 to 25 children at once.
Bathing often consisted of being hosed down with cold water.
After the Communist government fell, the outside world
intervened. Hundreds of these children were placed in adoptive
homes throughout the world, especially in the United States, Canada,
and the United Kingdom. Studies of these severely deprived children
on arrival in their new homes document malnourishment, chronic
respiratory and intestinal infections, and severe developmental
impairments.
A British study by Michael Rutter (1998) and his colleagues found the orphans, on arrival, to be about two standard deviations below age-matched children in weight, height, and head circumference (taken as a very rough measure of brain size). On scales of motor and cognitive development, most of the children scored in the impaired range.
The improvement these children showed in the first 2 years after
placement in their adoptive homes was nothing short of spectacular.
Average height and weight advanced to nearly normal, although head
circumference remained below normal. Many tested in the normal
range of motor and cognitive development. But a significant number
were still considered intellectually impaired. What caused these
individual differences in recovery from the past deprivation?
The key factor was age at adoption. Children adopted before 6
months of age did significantly better than those adopted later. In a
Canadian study by Elinor Ames (1997), Romanian orphans who were
adopted before 4 months of age and then tested at age 4½ had an
average Stanford–Binet IQ score of 98. Age-matched Canadian
controls had an average score of 109. Brain imaging studies showed that children adopted at an older age had smaller-than-normal brains.
Charles Nelson and his colleagues (Berens & Nelson, 2015;
Nelson et al., 2007; Smyke et al., 2012) analyzed cognitive and social
development as well as event-related potential (ERP) measures in a
group of children who had remained in Romania. Whether the
children had moved to foster homes or remained in institutions, the
studies reveal severe abnormalities at about 4 years of age. The age at
adoption was again important, but in the Nelson studies the critical
age appears to be before 24 months rather than 6 months, as in the
earlier studies.
The inescapable conclusion is that the human brain may be able to
recover from a brief period of extreme deprivation in early infancy,
but periods longer than 24 months produce significant developmental
abnormalities that cannot be overcome completely. The studies of
Romanian orphans make clear that the developing brain requires
stimulation for healthy development. Although the brain may be able
to catch up after a brief deprivation, severe deprivation lasting many
months results in a small brain and associated behavioral
abnormalities, especially in cognitive and social skills.
FIGURE 8-28 Sexual Differentiation in the Human Infant
Early in the indifferent stage, male and female human embryos are identical
(top). In the absence of testosterone, female structures emerge (left). In
response to testosterone, genitalia begin to develop into male structures at
about 60 days (right). Parallel changes take place in the embryonic brain in
response to the absence or presence of testosterone.
FIGURE 8-29 Sex Differences in Brain Volume (medial view) Cerebral areas
related to sex differences in the distribution of estrogen (orange) and
androgen (green) receptors in the developing brain correspond to areas of
relatively larger cerebral volumes in adult women and men. Information from J.
M. Goldstein, L. J. Seidman, N. J. Horton, N. Makris, D. N. Kennedy et al. (2001). Normal sexual
dimorphism of the adult human brain assessed by in vivo magnetic resonance imaging. Cerebral
Cortex, 11, 490–497.
Schizophrenia
When Mrs. T. was 16 years old, she began to experience her first symptom of
schizophrenia: a profound feeling that people were staring at her. These bouts of self-
consciousness soon forced her to end her public piano performances. Her self-
consciousness led to withdrawal, then to fearful delusions that others were speaking about
her behind her back, and finally to suspicions that they were plotting to harm her.
At first Mrs. T.’s illness was intermittent, and the return of her intelligence, warmth,
and ambition between episodes allowed her to complete several years of college, to
marry, and to rear three children. She had to enter a hospital for her illness for the first
time at age 28, after the birth of her third child, when she began to hallucinate.
Now, at 45, Mrs. T. is never entirely well. She has seen dinosaurs on the street and live
animals in her refrigerator. While hallucinating, she speaks and writes in an incoherent,
but almost poetic way. At other times, she is more lucid, but even then the voices she
hears sometimes lead her to do dangerous things, such as driving very fast down the
highway in the middle of the night, dressed only in a nightgown…. At other times and
without any apparent stimulus, Mrs. T. has bizarre visual hallucinations. For example, she
saw cherubs in the grocery store. These experiences leave her preoccupied, confused, and
frightened, unable to perform such everyday tasks as cooking or playing the piano.
(Gershon & Rieder, 1992, p. 127)
Pyramidal cell orientation in the hippocampus of (A) a healthy brain and (B) a schizophrenic brain. Research from J. A. Kovelman and A. B. Scheibel (1984). A neurohistologic correlate of schizophrenia. Biological Psychiatry, 19, p. 1613.
Developmental Disability
Impaired cognitive functioning accompanies abnormal brain
development. Impairment may range from mild, allowing an almost
normal lifestyle, to severe, requiring constant care. As summarized in
Table 8-3 , such developmental disability can result from chronic
malnutrition, genetic abnormalities such as Down syndrome, hormonal
abnormalities, brain injury, or neurological disease. Different causes
produce different abnormalities in brain organization, but the critical
similarity across all types of developmental disability is that the brain is
not normal.
Figure 3-22 illustrates trisomy, the chromosomal abnormality that
causes Down syndrome.
Dominick Purpura (1974) conducted one of the few systematic investigations of developmentally disabled children’s brains. Purpura used the Golgi stain to examine the neurons of children who had died of
accident or disease unrelated to the nervous system. When he examined
the brains of children with various forms of intellectual disability, he
found that dendrite growth was stunted and the spines very sparse relative
to dendrites from children of typical intelligence, as illustrated in Figure
8-33 .
TABLE 8-3 Causes of Developmental Disability (column headings: Cause, Example mechanism, Example condition)
FIGURE 8-33 Neuronal Contrast Representative dendritic branches from cortical
neurons in a child of typical intelligence (left) and a developmentally disabled child (right), whose dendrites are thinner and bear far fewer spines. Information from D. P. Purpura (1974).
Dendritic spine “dysgenesis” and mental retardation. Science, 186, p. 1127.
8-4 REVIEW
Brain Development and the Environment
Before you continue, check your understanding.
1 . The idea that specific molecules in different cells in various midbrain
regions give each cell a distinctive chemical identity is known as the
__________.
2 . Subnormal visual stimulation to one eye during early development
can lead to a loss of acuity, known as __________.
3 . The hormone __________ masculinizes the brain during
development.
4 . The brain’s sensitivity to experience is highest during ___________.
5 . Why do so many mental disorders appear during adolescence?
Answers appear at the back of the book.
How Do Any of Us Develop a Normal Brain?
When we consider the brain’s complexity, the less-than-precise process
of brain development, and the myriad factors—from SES to gut bacteria
—that can influence development, we are left to marvel at how so many
of us end up with brains that pass for normal. We all must have had
neurons that migrated to wrong locations, made incorrect connections, or were exposed to viruses or other harmful substances. If the brain were as
fragile as it might seem, to end up with a normal brain would be almost
impossible.
Apparently, animals have evolved a substantial capacity to repair
minor abnormalities in brain development. Most people have developed
in the range that we call normal because the human brain’s plasticity and
regenerative powers overcome minor developmental deviations. By
initially overproducing neurons and synapses, the brain gains the
capacity to correct errors that might have arisen accidentally.
These same plastic properties later allow us to cope with the ravages
of aging. Neurons are dying throughout our lifetime. By age 60,
investigators ought to see significant effects from all this cell loss,
especially considering the cumulative results of exposure to
environmental toxins, drugs, traumatic brain injuries, and other neural
insults. But this is not what happens.
Although some teenagers may not believe it, relatively few 60-year-
olds are demented. By most criteria, the 60-year-old who has been
intellectually active throughout adulthood is likely to be much wiser than
the 18-year-old whose brain has lost relatively few neurons. A 60-year-
old chess player will have a record of many more chess matches from
which to draw game strategies than does an 18-year-old, for example.
Clearly, some mechanism must enable us to compensate for loss and
minor injury to our brain cells. This capacity for plasticity and change,
for learning and adapting, is arguably the most important characteristic
of the human brain during development and throughout life.
We return to learning, memory, and neuroplasticity in Chapter 14 .
SUMMARY
8-1 Three Perspectives on Brain Development
Nervous system development entails more than the unfolding of a
genetic blueprint. Development is a complex dance of genetic and
environmental events that interact to sculpt the brain to fit within a
particular cultural and environmental context. We can approach this
dance from three perspectives: (1) correlating emerging brain structures
with emerging behaviors, (2) correlating new behaviors with neural
maturation, and (3) identifying influences on brain and behavior.
8-2 Neurobiology of Development
Human brain maturation is a long process, lasting as late as age 30.
Neurons, the units of brain function, develop a phenotype, migrate, and,
as their processes elaborate, establish connections with other neurons
even before birth. The developing brain produces many more neurons
and connections than it needs and then prunes back in toddlerhood and
again in adolescence and early adulthood to a stable level maintained by
some neurogenesis throughout the lifespan. Experiences throughout
development can trigger epigenetic mechanisms, such as gene
methylation, that alter gene expression.
8-3 Using Emerging Behaviors to Infer Neural
Maturation
Throughout the world, across the cultural spectrum, from newborn to
adult, we all develop through similar behavioral stages. As infants
develop physically, motor behaviors emerge in a predictable sequence
from gross, poorly directed movements toward objects to controlled
pincer grasps to pick up objects as small as pencils by about 11 months.
Cognitive behaviors also develop through stages of logic and problem
solving. Beginning with Jean Piaget, researchers have identified and
characterized four or more distinct stages of cognitive development.
Each stage can be identified by specific behavioral tests.
Behaviors emerge as the neural systems that produce them develop.
Matching median timetables of neurodevelopment with observed behavior allows us to infer the hierarchical relation between brain structure and brain function. Motor behaviors emerge in synchrony with maturing
motor circuits in the cerebral cortex, basal ganglia, and cerebellum, as
well as in the connections from these areas to the spinal cord. Similar
correlations between emerging behaviors and neuronal development
accompany the maturation of cognitive behavior as neural circuits in the
frontal and temporal lobes mature in early adulthood.
8-4 Brain Development and the Environment
The brain is most plastic during its development, and neuronal structures
and their connections can be molded by various factors throughout
development. The brain’s sensitivity to factors such as external events,
quality of environment, tactile stimulation, drugs, gonadal hormones,
stress, and injury varies over time. At critical periods in the course of
development, beginning prenatally, different brain regions are
particularly sensitive to different events.
Brain perturbations in the course of development from, say, anoxia,
trauma, or toxins can alter brain development significantly; can result in
severe behavioral abnormalities, including intellectual disability; and
may be related to such disorders as ASD or SIDS. Other behavioral
disorders emerge in adolescence, a time of prolonged frontal lobe
change.
8-5 How Do Any of Us Develop a Normal Brain?
The brain has a substantial capacity to repair or correct minor
abnormalities, allowing most people to develop normal behavioral
repertoires and to maintain brain function throughout life.
KEY TERMS
amblyopia, p. 269
androgen, p. 273
anencephaly, p. 277
apoptosis, p. 256
autism spectrum disorder (ASD), p. 255
cell adhesion molecule (CAM), p. 256
chemoaffinity hypothesis, p. 269
critical period, p. 269
dorsolateral prefrontal cortex (DLPFC), p. 259
estrogens, p. 273
filopod (pl. filopodia), p. 256
glioblast, p. 251
growth cone, p. 256
growth spurt, p. 263
imprinting, p. 271
masculinization, p. 273
netrin, p. 256
neural Darwinism, p. 256
neural plate, p. 249
neural stem cell, p. 251
neural tube, p. 249
neuroblast, p. 251
neurotrophic factor, p. 253
progenitor cell (precursor cell), p. 251
radial glial cell, p. 253
subventricular zone, p. 251
sudden infant death syndrome (SIDS), p. 277
testosterone, p. 273
tropic molecule, p. 256
How Do We Sense,
Perceive, and See the
World?
X = Fixation point
As a typical migraine scotoma develops, a person looking at the small
white × in the photograph at the far left would first see a small patch
of lines. This striped area continues growing outward, leaving an
opaque area (scotoma) where the stripes were, almost completely
blocking the visual field within 15 to 20 minutes. Normal vision returns
shortly thereafter.
Sensory Receptors
Sensory receptor neurons are specialized to transduce (convert) sensory
energy—light, for example—into neural activity. If we put flour into a
sieve and shake it, the more finely milled particles fall through the holes,
whereas the coarser particles and lumps do not. Sensory receptors are
designed to respond only to a narrow band of energy—analogous to
particles of certain sizes—within each modality’s energy spectrum. Each
sensory system’s receptors are specialized to filter a different form of
energy:
• For vision, the photoreceptors in the retina convert light energy into
chemical energy, which is in turn converted into action potentials.
• In the auditory system, air pressure waves are first converted into
mechanical energy, which activates the auditory receptors that produce
action potentials in auditory receptor neurons.
• In the somatosensory system, mechanical energy activates receptors
sensitive to touch, pressure, or pain. These somatosensory receptors in
turn generate action potentials in somatosensory receptor neurons.
• For taste and olfaction, various chemical molecules in the air or in food
fit themselves into receptors of various shapes to activate action
potentials in the respective receptor neurons.
Were our visual receptors somewhat different, we would be able to
see in the ultraviolet as well as the visible parts of the electromagnetic
spectrum, as honeybees and butterflies can. Receptors in the human ear
respond to a wide range of sound waves, but elephants and bats can hear
and produce sounds far below and above, respectively, the range humans
can hear. In fact, in comparison with those of other animals, human
sensory abilities are about average.
Even our pet dogs have “superhuman” powers: they can detect trace
odors; hear low-range sounds, as elephants do; and see in the dark. We
can hold up only our superior color vision. Thus, for each species and its
individual members, sensory systems filter the inputs to produce an
idiosyncratic representation of reality.
An animal’s perception of the world depends on its nervous
system’s complexity and organization.
Receptive Fields
Every sensory receptor organ and cell has a receptive field, a specific
part of the world to which it responds. If you fix your eyes on a point
directly in front of you, for example, what you see of the world is the
scope of your eyes’ receptive field. If you close one eye, the visual world
shrinks. What the remaining eye sees is the receptive field for that eye.
receptive field Sensory region that stimulates a receptor cell or
neuron.
Each photoreceptor cell in the eye points in a slightly different
direction and so has a unique receptive field. You can grasp the
conceptual utility of the receptive field by considering that the brain uses
information from each sensory receptor’s receptive field not only to
identify sensory information but also to contrast the information each
receptor field is providing.
Receptive fields not only sample sensory information but also help
locate events in space. Because adjacent receptive fields may overlap, the contrast between their responses to events helps us localize sensations.
This spatial dimension of sensory information produces cortical patterns
and maps that form each person’s sensory reality.
Our sensory system is organized to tell us both what is happening in
the world around us and what in the world we are doing ourselves. When we move, we change the perceived properties of objects, and we sense things that have little to do with the external world. When we run, visual stimuli appear to stream by us, a stimulus configuration called optic flow. When we move past a sound source, we hear an auditory flow, a gradual shift in sound intensity that takes place because of our changing location. Optic flow and auditory flow are useful for telling us how fast
we are going, whether we are going in a straight line or up or down, and
whether we or an object in the world is moving.
optic flow Streaming of visual stimuli that accompanies an
observer’s movement through space.
auditory flow Change heard as a person and a source of sound
move relative to one another.
Try this experiment. Slowly move your hand back and forth before
your eyes and gradually increase its speed. Eventually, your hand will
get a little blurry, because your eye movements are not quick enough to
follow its movement. Now keep your hand still and move your head
back and forth. The image of your hand remains clear. When receptors in
the inner ear inform your visual system that your head is moving, the
visual system compensates for the head movements, and you observe the
hand as a stationary image.
Neural Relays
Just as receptors are common to every sensory system, so are neural relays: all receptors connect to the cortex through a sequence of three or four intervening neurons. The visual and somatosensory systems have three relays, and
the auditory system has four. Information can be modified at various
stages in the relay, allowing the sensory system to mediate different
responses. Once again, this is very different from the sensory images on
a TV screen or from an audio system: information is being modified
repeatedly as the brain constructs the image or sound.
Neural relays also allow sensory systems to interact. There is no
straight-through point-to-point correspondence between one neural relay
and the next; rather, each successive relay recodes the activity. Sensory
neural relays are central to the hierarchy of motor responses in the brain.
Some of the three or four relays in each sensory system are in the
spinal cord; others, in the brainstem; and still others, in the neocortex. At
each level, the relay allows a sensory system to produce relevant actions
that define the hierarchy of our motor behavior. For example, the first
relay for pain receptors in the spinal cord is related to reflexes that
produce withdrawal of a body part from a painful stimulus. Thus, even after the spinal cord has been severed from the brain, a limb still withdraws from a painful stimulus.
Recall the principle from Section 2-6 : brain systems are organized
hierarchically and in parallel.
A dramatic effect of sensory interaction is the visual modification of
sound. If a person hears a speech syllable such as ba while observing
someone who is articulating ga, the listener hears not the actual sound ba
but a hybrid sound, da. The viewed lip movements modify the listener’s
auditory perception.
This interaction effect is potent: it highlights the fact that a speaker’s
facial gestures influence our perception of speech sounds. As Roy
Hamilton and his coworkers (2006) described, synchrony of gestures and
sounds is an important aspect of language acquisition. The difficulty of
learning a foreign language can relate to the difficulty of blending a
speaker’s articulation movements with the sounds the speaker produces.
Sensory Coding and Representation
Once transduced, information from every sensory system is encoded by action potentials that travel along peripheral nerves in the somatic nervous system until they enter the spinal cord or brain. From
there the action potentials travel on nerve tracts within the central
nervous system. Every bundle carries the same kind of signal. How do
action potentials encode different sensations? (How does vision differ
from touch?) How do they encode the features of particular sensations?
(How does purple differ from blue?)
Parts of these questions seem easy to answer; others pose a
fundamental challenge to neuroscience. The presence of a stimulus can
be encoded by an increase or decrease in a neuron’s discharge rate, and
the amount of increase or decrease can encode stimulus intensity. As
detailed in Section 9-4 , qualitative visual changes, such as from red to
green, can be encoded by activity in different neurons or even by
different levels of discharge in the same neuron. (For example, more
activity might signify redder and less activity, greener.)
Recall the principle from Section 2-6 : the nervous system works by
juxtaposing excitation and inhibition.
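To make the idea of rate coding concrete, here is a short Python sketch; it is our illustration under simple assumptions (the baseline of 12.5 spikes per second and the function name encoded_rate are ours), not a model from the text.

```python
# A minimal sketch of rate coding: stimulus presence and intensity are
# expressed as changes in a neuron's firing rate around a spontaneous baseline.
BASELINE_HZ = 12.5  # assumed spontaneous rate, for illustration only

def encoded_rate(intensity: float, gain_hz: float = 20.0, excitatory: bool = True) -> float:
    """Map stimulus intensity (0 to 1) onto a firing rate in spikes per second.

    Excitatory input raises the rate above baseline; inhibitory input lowers
    it, never below zero.
    """
    change = gain_hz * intensity
    rate = BASELINE_HZ + change if excitatory else BASELINE_HZ - change
    return max(rate, 0.0)

print(encoded_rate(0.0))                     # 12.5 -> no stimulus: baseline firing
print(encoded_rate(0.8))                     # 28.5 -> strong stimulus: excitation
print(encoded_rate(0.8, excitatory=False))   # 0.0  -> strong inhibition, floored at zero
```

The direction of the change signals the presence of a stimulus, and its size can signal intensity, which is all the sketch is meant to show.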
What is less clear is how we perceive such sensations as touch, sound,
and smell as different from one another. Part of the explanation is that each sensation is processed in its own distinct cortical region. Another is that we learn through experience to distinguish them. A third is that each sensory system
has a preferential link with certain reflexive movements, constituting a
separate wiring that helps keep each system distinct at all levels of neural
organization. For example, pain stimuli produce withdrawal responses,
and fine-touch and pressure stimuli produce approach responses.
Sensory Homunculus (plaster), English School, (20th century)/Natural History
Museum, London, UK/The Bridgeman Art Library
This curious figure reflects the topographic map in the sensorimotor
cortex. Disproportionately large areas control body parts we use to make
the most-skilled movements. See Sections 11-2 and 11-6 . Section 15-6
details synesthesia.
The distinctions among the sensory systems, however, are not always
clear: some people hear in color or identify smells by how the smells
sound to them. This mixing of the senses is called synesthesia. Anyone
who has shivered when hearing a piece of music or at the noise chalk or
fingernails can make on a blackboard has felt sound.
In most mammals, the neocortex represents the sensory field of each
modality—vision, hearing, touch, smell, or taste—as a spatially
organized neural representation of the external world. This topographic
map is a neural–spatial representation of the body or of the areas of the
sensory world perceived by a sensory organ. All mammals have at least
one primary cortical area for each sensory system. Additional areas are
usually referred to as secondary, because most of the information that
reaches these areas is relayed through the primary area. Each additional
representation is probably dedicated to encoding one specific aspect of
the sensory modality. For vision, different additional representational
areas may take part in perceiving color, movement, and form.
topographic map Spatially organized neural representation of the
external world.
Perception
Sensation is far more than the simple registration of physical stimuli
from the environment by the sensory organs. Compared with the richness
of actual sensation, our description of sensory neuroanatomy and
function is bound to seem sterile. Part of the reason for the disparity is
that our sensory impressions are affected by the context in which they
take place, by our emotional state, and by our past. All these factors
contribute to perception, the subjective experience of sensation—how
we interpret what we sense. Perception is more interesting to
neuropsychologists than is sensation.
sensation Registration by the sensory organs of physical stimuli
from the environment.
perception Subjective interpretation of sensations by the brain.
Clear proof that perception is more than sensation lies in the fact that
different people transform the same sensory stimulation into totally
different perceptions. The classic demonstration is an ambiguous image
such as the well-known Rubin’s vase shown in Figure 9-1 A . This
image may be perceived either as a vase or as two faces. If you fix your
eyes on the center of the picture, the two perceptions will alternate, even
though the sensory stimulation remains constant.
Similarly, the photograph of two cheetahs in Figure 9-1 B is
ambiguous. Which head goes with which cheetah? As with the Rubin’s
vase, the two perceptions may alternate. Such ambiguous images and
illusions demonstrate the workings of complex perceptual phenomena and offer insight into our cognitive processes.
FIGURE 9-1 Perceptual Illusions (A) Edgar Rubin’s ambiguous reversible image can be perceived as a vase or as two faces. (B) Likewise ambiguous, each cheetah’s head in the photograph can be perceived as belonging to either cheetah’s body. Photograph (B) © Gerry Lemmo.
9-1 REVIEW
Nature of Sensation and Perception
Before you continue, check your understanding.
1 . ___________ are energy filters that transduce incoming physical
energy into neural activity.
2 . ___________ fields locate sensory events. Receptor ___________
determines sensitivity to sensory stimulation.
3 . We distinguish one sensory modality from another by its
___________.
4 . Sensation registers physical stimuli from the environment by the
sensory organs. Perception is the ___________.
5 . How is the anatomical organization similar for each sense?
Answers appear at the back of the book.
Functional Anatomy of the Visual System
Our primary sensory experience is visual. Far more of the human brain is
dedicated to vision than to any other sense. Understanding the visual
system’s organization, then, is key to understanding human brain
function. To build this understanding, we begin by following the routes
visual information takes into and within the brain. This exercise is a bit
like traveling a road to discover where it goes. As you trace the route,
keep in mind the photograph in Focus 9-1 (page 284 ) and what the
different levels of the visual system are doing to capture that image in
the brain.
FIGURE 9-2 Central Focus This cross section through the retina (A) shows the
depression at the fovea—also shown in the scanning electron micrograph (B) —where
photoreceptors are packed most densely and where our vision is clearest.
Fovea
Try this experiment. Focus on the print at the left edge of this page. The
words will be clearly legible. Now, while holding your eyes still, try to
read the words on the right side of the page. It will be very difficult, even
impossible, even though you can see that words are there.
THE BASICS
Refractive Errors
FIGURE 9-3 Acuity Across the Visual Field Focus on the star
in the middle of the chart to demonstrate the relative sizes of letters legible
in the central field of vision compared with the peripheral field.
The lesson is that our vision is better in the center of the visual field
than at the margins, or periphery. Letters at the periphery must be much
larger than those in the center for us to see them as well. Figure 9-3
shows how much larger. The difference is due partly to the fact that
photoreceptors are more densely packed at the center of the retina, in a
region known as the fovea. Figure 9-2 shows that the retinal surface is
depressed at the fovea. This depression is formed because many optic
nerve fibers skirt the fovea to facilitate light access to its receptors.
fovea Central region of the retina specialized for high visual acuity;
its receptive fields are at the center of the eye’s visual field.
Blind Spot
Now try another experiment. Stand with your head over a tabletop and
hold a pencil in your hand. Close one eye. Stare at the edge of the tabletop nearest you. Now hold the pencil in a horizontal position and move it
along the edge of the table, with the eraser on the table. Beginning at a
point approximately below your nose, move the pencil slowly along the
table in the direction of the open eye.
When you have moved the pencil about 6 inches, the eraser will
vanish. You have found your blind spot, a small area of the retina also
known as the optic disc. This is the area where blood vessels enter and
exit the eye and where fibers leading from retinal neurons form the optic
nerve, which goes to the brain. There are therefore no photoreceptors in
this part of the retina. You can use Figure 9-4 to demonstrate the blind
spot in another way.
blind spot Retinal region where axons forming the optic nerve leave
the eye and blood vessels enter and leave; has no photoreceptors and
is thus said to be blind.
Fortunately, your visual system solves the blind spot problem: your
optic disc is in a different location in each eye. The optic disc is lateral to
the fovea in each eye, which means that it is left of the fovea in the left
eye and right of the fovea in the right eye. Because the two eyes’ visual
fields overlap, the right eye can see the left eye’s blind spot and vice
versa.
Using both eyes together, then, you can see the whole visual world.
For people blind in one eye, the sightless eye cannot compensate for the
blind spot in the functioning eye. Still, the visual system compensates for
the blind spot in several other ways, and so these people have no sense of
a hole in their field of vision.
The blind spot is of particular importance in neurology. It allows
neurologists to indirectly view the condition of the optic nerve while
providing a window on events in the brain. If intracranial pressure
increases, as occurs with a tumor or brain abscess (an infection), the
optic disc swells, leading to papilledema (swollen disc). The swelling
occurs in part because, like all neural tissue, the optic nerve is
surrounded by cerebrospinal fluid. Pressure inside the cranium can
displace CSF around the optic nerve, causing swelling at the optic disc.
Another cause of papilledema is inflammation of the optic nerve
itself, a condition known as optic neuritis. Whatever the cause, a person
with a swollen optic disc usually loses vision owing to pressure on the
optic nerve. If the swelling is due to optic neuritis, probably the most
common neurological visual disorder, the prognosis for recovery is good.
FIGURE 9-4 Find Your Blind Spot Hold this book 30 centimeters
(about 12 inches) away from your face. Shut your left eye and look at the
cross with your right eye. Slowly bring the page toward you until the red
spot in the center of the yellow disc disappears and the entire disc
appears yellow. The red spot is now in your blind spot. Your brain replaces
the area with the surrounding yellow to fill in the image. Turn the book
upside down to test your left eye.
Photoreceptors
The retina’s photoreceptor cells convert light energy first into chemical
energy and then into neural activity. Light striking a photoreceptor
triggers a series of chemical reactions that lead to a change in membrane
potential (electrical charge) that in turn leads to a change in the release of
neurotransmitter onto nearby neurons.
SPL/Science Source
Rods and cones, the two types of photoreceptors shown in Figure 9-5
, differ in many ways. They are structurally different. Rods are longer
than cones and cylindrical at one end, whereas cones have a tapered end.
Rods are more numerous than cones; are sensitive to low levels of
brightness (luminance), especially in dim light; and function mainly for
night vision (see Clinical Focus 9-2 , Visual Illuminance). Cones do not
respond to dim light, but they are highly responsive to bright light. Cones
mediate both color vision and our ability to see fine detail (visual acuity).
rod Photoreceptor specialized for functioning at low light levels.
cone Photoreceptor specialized for color and high visual acuity.
Rods and cones are unevenly distributed over the retina. The fovea
has only cones, but their density drops dramatically beyond the fovea.
For this reason, our vision is not so sharp at the edges of the visual field,
as demonstrated in Figure 9-3 .
A final difference between rods and cones is in their light-absorbing
pigments. All rods have the same pigment. Each cone has one of three
pigments. These four pigments, one in the rods and three in the cones,
form the basis for our vision.
As shown on the spectrum in Figure 9-6 , the three cone pigments
absorb light across a range of visible wavelengths, but each is most
responsive to a small range of wavelengths—short (bluish light),
medium (greenish light), and long (reddish light). As you can see on the
background spectrum in Figure 9-6 , however, if you were to look at
lights with wavelengths of 419, 531, and 559 nanometers (nm), they
would not appear blue, green, and red but rather blue-green, yellow-
green, and orange. Remember, though, that you are looking at the lights
with all three of your cone types and that each cone pigment responds to
light across a range of wavelengths, not just to its wavelength of maximum absorption.
A nanometer (nm) is one-billionth of a meter.
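A toy calculation can make this point concrete. The Python sketch below approximates each cone pigment’s absorption curve as a Gaussian around its peak wavelength; real pigment spectra are broader and asymmetric, and the 50-nm width is purely our assumption, so the numbers are illustrative only. Still, it shows that a 559-nm light excites M cones substantially as well as L cones, which is one reason it looks orange rather than pure red.

```python
import math

# Toy model: treat each cone pigment's absorption curve as a Gaussian around
# its peak. The peak wavelengths come from the text; the 50-nm tuning width
# is an assumption made only for illustration.
PEAKS_NM = {"S": 419.0, "M": 531.0, "L": 559.0}
WIDTH_NM = 50.0

def cone_responses(wavelength_nm: float) -> dict:
    """Relative response of each cone type to a monochromatic light."""
    return {
        cone: math.exp(-((wavelength_nm - peak) / WIDTH_NM) ** 2)
        for cone, peak in PEAKS_NM.items()
    }

# A 559-nm light drives L cones maximally but also drives M cones strongly,
# so the pattern of activity across cone types, not any single cone, signals hue.
print({cone: round(r, 2) for cone, r in cone_responses(559).items()})
# {'S': 0.0, 'M': 0.73, 'L': 1.0}
```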
Both the presence of three cone receptor types and their relative
numbers and distribution across the retina contribute to our perception of
color. As Figure 9-7 shows, the three cone types are distributed more or
less randomly across the retina, making our ability to perceive different
colors fairly constant across the visual field. The numbers of red and
green cones are approximately equal, but blue cones are fewer. As a
result, we are not as sensitive to wavelengths in the blue part of the
visible spectrum as we are to red and green wavelengths.
FIGURE 9-6 Range and Peak Sensitivity Our color perception
corresponds to the summed activity of the three cone types: S cones, M
cones, and L cones (for short, medium, and long wavelengths). Each type
is most sensitive to a narrow range of the visible spectrum. Rods (white
curve) prefer a range of wavelengths centered on 496 nm but do not
contribute to our color perception. Rod activity is not summed with the
cones in the color vision system.
CLINICAL FOCUS 9-2
Visual Illuminance
The eye, like a camera, works correctly only when sufficient light
passes through the lens and is focused on the receptor surface—the
retina of the eye or the light-sensitive surface in the camera. Too
little light entering the eye or the camera produces a problem of
visual illuminance : it is hard to see any image at all.
Visual illuminance is typically a complication of aging eyes, and it cannot be cured by corrective lenses. As we age, the eye’s lens and cornea allow less light through, so less light strikes the retina. Don Kline (1994) estimated that people’s ability to see in dim light drops by 50 percent between ages 20 and 40 and by a further 50 percent over every 20 additional years (see the sketch following this box). As a result, seeing in dim light becomes increasingly difficult, especially at night.
The only solution to compensate for visual illuminance is to
increase lighting. Night vision is especially problematic. Not
surprisingly, statistics show a marked drop in the number of people
driving at night in each successive decade after age 40.
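Kline’s estimate amounts to a simple compounding rule, sketched below in Python; the smooth exponential form and the function name are ours, for illustration only.

```python
# Back-of-the-envelope version of Kline's (1994) estimate: the ability to see
# in dim light halves between ages 20 and 40 and halves again over every
# further 20 years.
def relative_dim_light_sensitivity(age: float, baseline_age: float = 20.0) -> float:
    """Dim-light sensitivity relative to a 20-year-old, halving every 20 years."""
    if age <= baseline_age:
        return 1.0
    return 0.5 ** ((age - baseline_age) / 20.0)

for age in (20, 40, 60, 80):
    print(age, relative_dim_light_sensitivity(age))
# 20 -> 1.0   40 -> 0.5   60 -> 0.25   80 -> 0.125
```

By this rough rule, an 80-year-old retina receives on the order of one-eighth the effective dim-light signal of a 20-year-old’s, which is why increased lighting is the practical remedy.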
FIGURE 9-7 Retinal Receptors The retinal mosaic of rods and three
cone types. This diagram represents the distribution near the fovea, where
cones outnumber rods. Red and green cones outnumber the blue.
Other species that have color vision similar to humans’ also have
three types of cones with three color pigments. Because of slight
variations in these pigments, the exact wavelengths of maximum absorption differ among species. For humans, the peak wavelengths are not identical with the numbers given earlier, which are an average across
mammals. They are actually 426 and 530 nm for the blue and green
cones, respectively, and 552 or 557 nm for the red cone. The two peak
sensitivity levels of red cones represent the two variants that humans
have evolved. The difference in these two red cones appears minuscule,
but it does make a functional difference in some females’ color
perception.
The gene for the red cone is carried on the X chromosome. Males
have only one X chromosome, so they have only one of these genes and
only one type of red cone. The situation is more complicated for females,
who possess two X chromosomes. Although most women have only one
type of red cone, those who have both are more sensitive than the rest of
us to color differences at the red end of the spectrum. We could say that
women who have both red cone types have a slightly rosier view of the
world: their color receptors construct a world with a richer range of red
experiences. But they also have to contend with seemingly peculiar color
coordination by others.
Retinal ganglion cells fall into two major categories, called M and P
cells in the primate retina. The designations derive from the distinctly
different cell populations in the visual thalamus to which these two
classes of RGCs send their axons. As shown in Figure 9-9 , one
population consists of magnocellular cells (hence M); the other consists
of parvocellular cells (hence P). The larger M cells receive their input
primarily from rods and so are sensitive to light but not to color. The
smaller P cells receive their input primarily from cones and so are
sensitive to color.
In Latin, magno means large and parvo means small.
magnocellular (M) cell Large visual system neuron sensitive to
moving stimuli.
parvocellular (P) cell Small visual system neuron sensitive to
differences in form and color.
M cells are found throughout the retina, including the periphery,
where we are sensitive to movement but not to color or fine detail. P
cells are found largely in the region of the fovea, where we are sensitive
to color and fine details. As we follow the ganglion cell axons into the
brain, you will see that these two categories of RGCs maintain their
distinctiveness throughout the visual pathways.
FIGURE 9-9 Visual Thalamus The optic nerves connect with the
lateral geniculate nucleus of the thalamus. The LGN has six layers: two
magnocellular layers, which receive input mainly from rods, and four
parvocellular layers, which receive input mainly from cones.
Visual Pathways
RGCs form the optic nerve, the road into the brain. This road forks off to
several places. The destinations of these branches give us clues to what
the brain is doing with visual input and how the brain constructs our
visual world.
Crossing the Optic Chiasm
We begin with the optic nerves, one exiting from each eye. Just before
entering the brain, the optic nerves partly cross, forming the optic
chiasm.
optic chiasm Junction of the optic nerves, one from each eye, at
which the axons from the nasal halves of the retinas cross to the
brain’s opposite side.
The optic chiasm gets its name from the shape of the Greek letter
chi (X) (pronounced ki ).
About half the fibers from each eye cross in such a way that fibers from the left half of each retina go to the left side of the brain, and fibers from the right half go to the brain’s right side, as diagrammed in Figure 9-10 . The
medial path of each retina, the nasal retina, crosses to the opposite side.
The lateral path, the temporal retina, travels straight back on the same
side. Because light that falls on the right half of each retina actually
comes from the left side of the visual field, information from the left
visual field goes to the brain’s right hemisphere, and information from
the right visual field goes to the left hemisphere. Thus, half of each
retina’s visual field is represented on each side of the brain.
By connecting both eyes with both hemispheres, our visual system
represents the world seen through two eyes as a single perception.
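The crossing rule lends itself to a compact summary. The Python sketch below is our illustration of the routing just described (the function name receiving_hemisphere is ours): nasal fibers cross at the chiasm, temporal fibers do not, so both eyes send a given half of the visual field to the opposite hemisphere.

```python
def receiving_hemisphere(visual_field_side: str, eye: str) -> str:
    """Hemisphere that receives input about one side of the visual field
    from one eye, following the crossing rule described in the text."""
    # Light from one side of the visual field falls on the opposite half
    # of each retina.
    retinal_half = "right" if visual_field_side == "left" else "left"
    # The nasal (medial) half of each retina is the half toward the nose:
    # the right half of the left eye, the left half of the right eye.
    nasal_half = "right" if eye == "left" else "left"
    if retinal_half == nasal_half:
        # Nasal fibers cross at the optic chiasm.
        return "right" if eye == "left" else "left"
    # Temporal fibers stay on the same side as the eye.
    return eye

# Both eyes route the left visual field to the right hemisphere, and vice versa.
assert receiving_hemisphere("left", "left") == "right"
assert receiving_hemisphere("left", "right") == "right"
assert receiving_hemisphere("right", "left") == "left"
assert receiving_hemisphere("right", "right") == "left"
```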
FIGURE 9-10 Crossing the Optic Chiasm This dorsal view
shows the visual pathway from each eye to the primary visual cortex of
each hemisphere. Information from the right side of the visual field (blue)
moves from the two left halves of the retinas, ending in the left
hemisphere. Information from the left side of the visual field (red) hits the
right halves of the retinas and travels to the right side of the brain.
Bryan Kolb
FIGURE 9-12 Striate Cortex Area V1 is also called the striate cortex
because sections appear striated (striped) when stained with either a cell
body stain (left) or a myelin stain (right). The sections shown here come
from a rhesus monkey’s brain.
Having identified the temporal lobe and parietal lobe visual pathways,
researchers went searching for their possible functions. Why would
evolution produce two different destinations for these neural pathways?
Each route must produce visual knowledge for a different purpose.
David Milner and Mel Goodale (2006) proposed that these two
purposes are to identify a stimulus (the what function) and to control
movement to or away from the stimulus (the how function). This what –
how distinction came from an analysis of the routes visual information
takes when it leaves the striate cortex. Figure 9-13 shows the two
distinct visual pathways that originate in the striate cortex, one
progressing to the temporal lobe and the other to the parietal lobe. The
pathway to the temporal lobe is the ventral stream, whereas the
pathway to the parietal lobe is the dorsal stream.
ventral stream Visual processing pathway from V1 to the temporal
lobe for object identification and perceiving related movements.
dorsal stream Visual processing pathway from V1 to the parietal
lobe; guides movements relative to objects.
Both the geniculostriate and the tectopulvinar pathways contribute to
the dorsal and ventral streams. To understand how the two streams
function, we return to the details of how visual input from the eyes
contributes to them.
FIGURE 9-13 Visual Streaming Information travels from the occipital
visual areas to the parietal and temporal lobes, forming the dorsal (how )
and ventral (what ) streams, respectively.
Geniculostriate Pathway
The RGC fibers from the two eyes distribute their connections to the two
lateral geniculate nuclei (left and right) of the thalamus. At first glance,
this appears to be an unusual arrangement. As seen in Figure 9-10 , the
fibers from the left half of each retina go to the left LGN; those from the
right half of each retina go to the right LGN. But the fibers from each
eye do not go to exactly the same LGN location.
Each LGN has six layers, and the projections from the two eyes go to
different layers, as illustrated in anatomical context in Figure 9-9 and
diagrammed in Figure 9-14 . Layers 2, 3, and 5 receive fibers from the
ipsilateral eye (the eye on the same side), whereas layers 1, 4, and 6
receive fibers from the contralateral eye (the eye on the opposite side).
This arrangement provides both for combining the information from the
two eyes and for segregating the information from the P and M ganglion
cells.
Axons from the P cells go only to layers 3 through 6 (the
parvocellular layers). Axons from the M cells go only to layers 1 and 2
(the magnocellular layers). Because the P cells are responsive to color
and fine detail, LGN layers 3 through 6 must be processing information
about color and form. In contrast, the M cells mostly process information
about movement, so layers 1 and 2 must deal with movement.
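Because this wiring is spread over several sentences, a small lookup table, written here as a Python dictionary of our own devising, may help keep it straight for one LGN.

```python
# Wiring of the six LGN layers as described above: which eye supplies each
# layer and which retinal ganglion cell class projects to it.
LGN_LAYERS = {
    1: {"eye": "contralateral", "input": "M (magnocellular)"},
    2: {"eye": "ipsilateral",   "input": "M (magnocellular)"},
    3: {"eye": "ipsilateral",   "input": "P (parvocellular)"},
    4: {"eye": "contralateral", "input": "P (parvocellular)"},
    5: {"eye": "ipsilateral",   "input": "P (parvocellular)"},
    6: {"eye": "contralateral", "input": "P (parvocellular)"},
}

# Layers 1, 4, and 6 receive the contralateral eye; layers 1 and 2 carry the
# movement-related magnocellular input.
contralateral_layers = [n for n, v in LGN_LAYERS.items() if v["eye"] == "contralateral"]
magno_layers = [n for n, v in LGN_LAYERS.items() if v["input"].startswith("M")]
print(contralateral_layers, magno_layers)   # [1, 4, 6] [1, 2]
```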
Just as there are six layers in the thalamic LGN (numbered 1 through
6), there are also six layers in the striate cortex (numbered I through VI).
That there happen to be six layers in each location is an accident of
evolution found in all primate brains. Let us now see where these LGN
cells from the thalamus send their connections within the visual cortex.
Figure 2-22 maps layers I through VI in the primary motor and
sensory cortices.
Occipital Cortex
Our route down the visual pathways has led us from the retina all the
way back to the occipital lobe and into the parietal and temporal lobes.
Now we explore how visual information proceeds from the striate cortex
through the rest of the occipital lobe to the dorsal and ventral streams.
FIGURE 9-16 Visual Regions of the Occipital Lobe (A) Medial view of functional areas.
FIGURE 9-19 Vision Beyond the Occipital Cortex (A) In the
temporal lobe, the fusiform face area (FFA) processes faces, and the
parahippocampal place area (PPA) processes scenes. (B) In the parietal
lobe, the lateral intraparietal area (LIP) contributes to eye movements; the
anterior intraparietal area (AIP) is involved in visual control of grasping;
and the parietal reach region (PRR) participates in visually guided
reaching. (A) Republished with permission of Hasson, U., Y. Nir, I. Levy, G. Fuhrmann, and
R. Malach. Intersubject synchronization of cortical activity during natural vision. Science
303:1634–1640, 2004, permission conveyed through Copyright Clearance Center, Inc.
Seeing Shape
Imagine a microelectrode placed near a neuron somewhere in the visual
pathway from retina to cortex. The microelectrode is recording changes
in the neuron’s firing rate. This cell fires spontaneously from time to time; each discharge is an action potential. Assume that the neuron discharges, on average, once every 0.08 second (roughly 12 times per second). Each action potential is brief, on the order of 1 millisecond.
Figure 4-6 diagrams how microelectrodes work.
If we plot action potentials spanning 1 s, we see only spikes in the
record because the action potentials are so brief. Figure 9-25 A is a
single cell recording of 12 spikes in the span of 1 second. If the firing
rate of this cell increases, we see more spikes (Figure 9-25 B). If the
firing rate decreases, we see fewer spikes (Figure 9-25C ). The increase
in firing is the result of neuronal excitation, whereas the decrease
indicates inhibition. Excitation and inhibition, of course, are the principal
information transfer mechanisms in the nervous system.
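To make the numbers concrete, the short Python sketch below (not from the text; the rates and the random-firing assumption are purely illustrative) simulates spike counts in a 1-second window for a cell firing at roughly the baseline rate described above and then under excitation and inhibition, mirroring the three panels of Figure 9-25.

import random

def count_spikes(rate_hz, duration_s=1.0, dt=0.001, seed=1):
    # Treat firing as a random process: in each 1-ms bin the cell fires
    # with probability rate_hz * dt (a crude Poisson-like approximation).
    rng = random.Random(seed)
    n_bins = int(duration_s / dt)
    return sum(1 for _ in range(n_bins) if rng.random() < rate_hz * dt)

# Illustrative rates: about 12 spikes per second at rest (one spike every
# 0.08 s or so), more spikes under excitation, fewer under inhibition.
for label, rate_hz in [("baseline", 12), ("excitation", 40), ("inhibition", 3)]:
    print(label, count_spikes(rate_hz), "spikes in 1 s")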
Now suppose we present a stimulus to the neuron by illuminating its
receptive field in the retina, perhaps by shining a light on a blank screen
within the cell’s visual field. We might place before the eye a straight
line positioned at a 45° angle. The cell could respond to this stimulus
either by increasing or decreasing its firing rate. In either case, we would
conclude that the cell is generating information about the line.
The same cell could show excitation to one stimulus, inhibition to
another stimulus, and no reaction at all to a third. The cell could be
excited by lines oriented 45° to the left and inhibited by lines oriented
45° to the right. Similarly, the cell could be excited by stimulation in one
part of its receptive field (such as the center) and inhibited by stimulation
in another part (such as the periphery).
Finally, we might find that the cell’s response to a particular stimulus
is selective. Such a cell would be telling us about the importance of the
stimulus to the animal. For instance, the cell might be excited when a
stimulus is presented with food but inhibited when the same stimulus is
presented alone. In each case, the cell is selectively sensitive to
characteristics in the visual world.
Neurons at each level of the visual system have distinctly different
characteristics and functions. Our goal is not to look at each neuron type
but rather to consider generally how some typical neurons at each level
differ from one another in their contributions to processing shape. We
focus on neurons in three areas: the ganglion cell layer of the retina, the
primary visual cortex, and the temporal cortex.
FIGURE 9-25 Single-cell recording over a 1-second span: (A) baseline (12 spikes per second); (B) excitation (more spikes); (C) inhibition (fewer spikes).
Processing in RGCs
Neurons in the retina do not detect shape, because their receptive fields
are minuscule dots. Each retinal ganglion cell responds only to the
presence or absence of light in its receptive field, not to shape. Shape is
constructed by processes in the cortex from the information that those
ganglion cells pass on about events in their receptive fields.
The receptive field of a ganglion cell has a concentric circle
arrangement, as illustrated in Figure 9-26 A . A spot of light falling in
the receptive field’s central circle excites some of these cells, whereas a
spot of light falling in the receptive field’s surround (periphery) inhibits
the cell. A spot of light falling across the entire receptive field weakly
increases the cell’s firing rate.
This type of neuron is called an on-center cell. Other RGCs, called
off-center cells, have the opposite arrangement, with light in the center of
the receptive field inhibiting, light in the surround exciting, and light
across the entire field producing weak inhibition (Figure 9-26 B). The
on–off arrangement of RGC receptive fields makes these cells especially
responsive to tiny spots of light.
This description of ganglion cell receptive fields might mislead you
into thinking that they form a mosaic of discrete little circles on the
retina. In fact, neighboring retinal ganglion cells receive their inputs
from an overlapping set of photoreceptors. As a result, their receptive
fields overlap, as illustrated in Figure 9-27 . In this way, a small spot of
light shining on the retina is likely to produce activity in both on-center
and off-center RGCs.
FIGURE 9-26 (A) On-center cell’s receptive field. (B) Off-center cell’s receptive field.
How can on-center and off-center ganglion cells tell the brain
anything about shape? The answer is that a ganglion cell tells the brain
about the amount of light hitting a certain spot on the retina compared
with the average amount of light falling on the surrounding retinal
region. This comparison is known as luminance contrast. Luminance is
the amount of visible light reflected to the eye from a surface, and
contrast is the difference in luminance between adjacent parts of that
surface. The photograph in Focus 9-1 (p. 284) shows us two clear
differences in luminance contrast. On the left, the woman’s pink top
contrasts sharply with her black slacks, but the sleeve on the right
contrasts far less with the background. It does not appear as bright.
luminance contrast Amount of light an object reflects relative to its
surroundings.
To understand how luminance contrast tells the brain about shape,
consider the hypothetical population of on-center ganglion cells
represented in Figure 9-28 . Their receptive fields are distributed across
the retinal image of a light–dark edge. Some of the ganglion cells’
receptive fields are in the dark area, others are in the light area, and still
others’ fields straddle the edge of the light.
The ganglion cells with receptive fields in the dark or light areas are
least affected because they receive either no stimulation or stimulation of
both the excitatory and the inhibitory regions of their receptive fields.
The ganglion cells most affected by the stimulus are those lying along
the edge. Ganglion cell B is inhibited because the light falls mostly on its
inhibitory surround, and ganglion cell D is excited because its entire
excitatory center is stimulated but only part of its inhibitory surround is.
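The arithmetic behind this edge effect is easy to sketch. The Python fragment below is a one-dimensional caricature (the receptive-field sizes and the 0.5 surround weight are invented for illustration, not taken from the text): each on-center cell adds up light in a small excitatory center and subtracts light in a wider inhibitory surround, and only the cells whose fields straddle the edge give a strongly biased response.

def cell_response(stimulus, position, center=1, surround=3):
    # Excitatory center: a few pixels around the cell's position.
    exc = sum(stimulus[position - center:position + center + 1])
    # Inhibitory surround: a wider window, minus the center pixels.
    inh = sum(stimulus[position - surround:position + surround + 1]) - exc
    # Weight the surround so full-field light gives only a weak net response.
    return exc - 0.5 * inh

edge = [0] * 10 + [1] * 10   # dark on the left, light on the right
# Positions: deep in the dark, just inside the dark, just inside the light, deep in the light.
for pos in (5, 8, 11, 15):
    print(f"cell at {pos:2d}: response {cell_response(edge, pos):+.1f}")

With these made-up numbers, the cells well inside the dark or light areas report little or nothing, the cell just inside the dark is inhibited, and the cell just inside the light is strongly excited, which is the pattern Figure 9-28 illustrates for ganglion cells B and D.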
FIGURE 9-27 Overlapping Receptive Fields
(A) Fritz Goro/Time & Life Pictures/Getty Images
FIGURE 9-36 Color Mixing (A) Subtractive color mixing absorbs light
waves that we see as red, blue, or yellow. When all visible wavelengths
are absorbed, we see black. (B) Additive color mixing reflects light waves
that we see as red, blue, and green. When all visible wavelengths are
reflected, we see white.
Seeing Color
Scientists have long wondered why—and how—we see a world so rich
in color. One hypothesis on the why is that color vision evolved first in
the great apes, specifically in apes that eat fruit. Chimpanzees and
humans are members of this family. Over their evolution, both species
have faced plentiful competition for ripe fruits—from other animals,
insects, and each other. Scientists suspect that color vision gave the great
apes a competitive evolutionary advantage.
Section 1-4 recounts several ideas on how the primate lifestyle,
including diet, encouraged the evolution of complex nervous
systems.
An explanation of color vision has its roots in the Renaissance 600
years ago in Italy. Painters of the time discovered that they could obtain
the entire range of colors in the visual world by mixing only three colors
of paint (red, blue, and yellow). This is the process of subtractive color
mixing shown in Figure 9-36 A .
We now know that such trichromatic color mixing is a property of the
cones in the retina. Subtractive color mixing works by removing light
from the mix. This is why matte black surfaces reflect no light: the
darker the color, the less light it contains.
Conversely, additive color mixing increases light to make color
(Figure 9-36 B). The lighter the color, the more light it contains, which is
why a white surface reflects the entire visible spectrum. Unlike those of
paint, the primary colors of light are red, blue, and green. Light of
different wavelengths stimulates the three cone receptor types in
different ways. It is the ratio of activity of these three receptor types that
forms our impressions of colors.
Trichromatic Theory
According to the trichromatic theory, the color we see—say, blue at
short 400-nanometer wavelengths, green at medium 500 nm, and red at
long 600 nm—is determined by the relative responses of the
corresponding cone types (see Figure 9-6 ). If all three types are equally
active, we see white.
trichromatic theory Explanation of color vision based on the
coding of three primary colors: red, green, and blue.
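As a rough numerical illustration of what “relative responses” means (a sketch under assumed values, not data from the text), the Python snippet below gives the three cone types bell-shaped sensitivities around commonly cited approximate peaks near 420, 530, and 560 nanometers and prints the relative response of each type to a few wavelengths of light.

import math

# Approximate peak sensitivities of the three cone types; the bell-curve
# shape and its width are simplifications chosen for illustration.
PEAKS = {"short (blue)": 420, "medium (green)": 530, "long (red)": 560}

def cone_ratios(wavelength_nm, width=40):
    resp = {name: math.exp(-((wavelength_nm - peak) / width) ** 2)
            for name, peak in PEAKS.items()}
    total = sum(resp.values())
    return {name: round(r / total, 2) for name, r in resp.items()}

for wl in (450, 530, 600):   # bluish, greenish, and reddish light
    print(wl, "nm:", cone_ratios(wl))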
Trichromatic theory predicts that if we lack one cone receptor type,
we cannot process as many colors as we could with all three. This is
exactly what happens when a person is born with only two cone types.
The colors this person cannot perceive depend on which receptor type is
missing, as illustrated in Research Focus 9-3 , Color-Deficient Vision.
RESEARCH FOCUS 9-3
Color-Deficient Vision
Most people’s retinas contain three cone types. These people have
trichromatic vision. But some people are missing one or more cone
types and are thus often mistakenly said to be color-blind.
Mistakenly, because people who have two types of cones still can
distinguish lots of colors, just not as many as people with three cones
can.
To have no color vision at all, one would have to have only one
type of photoreceptor, rods. This is a rare occurrence, but we do have
a friend who has no concept of color. It has led to a lifetime of
practical jokes, because others (especially his wife) must choose
clothing colors that coordinate for him to wear.
The complete lack of red cones leads to a condition called
protanopia; the lack of green cones is deuteranopia; the lack of blue
cones is tritanopia. The frequency of each condition is about 1
percent in men and 0.01 percent in women. Having only a partial
lack of one cone type, most commonly the green cone, also is
possible. This condition afflicts about 5 percent of men and 0.4
percent of women.
The illustration provides a simple approximation, compared with
trichromats (left), of what people with protanopia (center) or
deuteranopia (right) see. They still see plenty of color, but that color
is largely different from the color trichromats see. Many domestic animals (dogs, cats, and horses among them) have deuteranopia, which actually gives them an advantage in seeing objects that appear camouflaged to trichromats. In fact, the military often uses people with deuteranopia to help see through camouflage.
Image as viewed by a trichromat observer
Protanopia: image as viewed by an observer lacking red cones
Dr. Terrace L. Waggoner/www.ColorVisionTesting.com
Deuteranopia: image as viewed by an observer lacking green cones
The mere presence of cones in an animal’s retina does not mean that
the animal has color vision. It simply means that the animal has
photoreceptors particularly sensitive to light. Many animals lack color
vision as we know it, but the only animal with eyes known to have no
cones at all is a fish, the skate.
Opponent Processes
Although the beginning of color perception in the cones follows the
trichromatic model, succeeding levels of color processing use a different
strategy. Try staring first at the red and blue box in Figure 9-37 for about
30 seconds, then at the white box next to it. When you shift your gaze to
the white surface, you will see an afterimage in the colors opposite to red
and blue—green and yellow. Conversely, if you stare at a green and
yellow box and then shift to white, you will see a red and blue
afterimage. Such afterimages lead to the sense that there are actually four
basic colors (red, green, yellow, and blue).
FIGURE 9-37 Demonstrating Opposing Color Pairs Stare
at the rectangle on the left for about 30 seconds. Then stare at the white
box on the right. You will see an afterimage of green on the red side and of
yellow on the blue side.
FIGURE 9-38 Opponent-Color Contrast Response (A) A
red–green color-sensitive RGC responds weakly to white light on its center
and surround because red and green cones absorb white light to similar
extents, so their inputs cancel out. (B) The cell responds strongly to a spot
of red light in its center as well as to red’s paired wavelength, green, in the
surround. (C) It is strongly inhibited by a small spot of green in its center.
(D) The RGC responds very strongly to simultaneous illumination of the
center with red and the surround with green. (E) It is completely inhibited
by the simultaneous illumination of the center with green and the surround
with red.
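A common way to model this next stage (a simplified sketch of the standard opponent-process idea, with weights chosen for illustration rather than taken from the text) is to recombine the three cone signals into a red-versus-green difference and a blue-versus-yellow difference:

def opponent_channels(L, M, S):
    # Cone activities on a 0-1 scale: L = long (red), M = medium (green), S = short (blue).
    red_green = L - M               # positive leans red, negative leans green
    blue_yellow = S - (L + M) / 2   # positive leans blue, negative leans yellow
    return round(red_green, 2), round(blue_yellow, 2)

print("reddish light: ", opponent_channels(L=0.9, M=0.3, S=0.1))   # (0.6, -0.5): red and yellowish
print("greenish light:", opponent_channels(L=0.3, M=0.9, S=0.1))   # (-0.6, -0.5): green and yellowish
print("white light:   ", opponent_channels(L=0.8, M=0.8, S=0.8))   # (0.0, 0.0): balanced, no opponent signal

On this account, staring at red for 30 seconds adapts the red-leaning side of the red–green channel, so the channel briefly signals green when you switch to a white surface, which is one common way to gloss the afterimage demonstration in Figure 9-37.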
J. W.’s copy
FIGURE 9-41 Injury to the Ventral Stream J. W. survived a severe heart attack while exercising and, later, anoxia. Subsequently, he was unable to recognize the simple line drawings on the left and copied them poorly (right).
The rightmost column in Figure 9-43 shows that, when asked to pick
up the same irregularly shaped objects that D. F. could grasp normally,
R. V. often failed to place her fingers on the appropriate grasp points,
even though she could distinguish the objects easily. In other words,
although R. V.’s perception of an object’s features was normal for the
task of describing that object, her perception was not normal for the task
of visually guiding her hand to reach for the object.
To summarize, people with damage to the parietal cortex in the dorsal
visual stream can see perfectly well, yet they cannot accurately guide
their movements on the basis of visual information. Guidance of
movement is the dorsal stream’s function. In contrast, people with
damage to the ventral stream cannot perceive objects, because object
perception is a ventral stream function. Yet these same people can guide
their movements to objects on the basis of visual information.
The first kind of patient, like R. V., has an intact ventral stream that
analyzes the visual characteristics of objects. The second kind of patient,
like D. F., has an intact dorsal stream that visually directs movements.
Comparing the two types of cases enables us to infer the visual functions
of the dorsal and ventral streams.
9-5 REVIEW
The Visual Brain in Action
Before you continue, check your understanding.
1. Cuts completely through the optic tract, LGN, or V1 produce ___________.
2. Small lesions of V1 produce small blind spots called ___________.
3. Destruction of the retina or the optic nerve of one eye produces ___________.
4. Severe deficits in visually guided reaching are called ___________.
5. Contrast the effects of injury to the dorsal stream with the effects of injury to the ventral stream.
Answers appear at the back of the book.
(A) The ventral stream begins in V1 and flows through V2 to V3 and V4,
then into the temporal visual areas. (B) The dorsal stream begins in V1
and flows through V5 and V3A to the posterior parietal visual areas.
Double-headed arrows show information flow between the two streams—
between recognition and action, perception and behavior.
KEY TERMS
auditory flow, p. 286
blind spot, p. 293
blob, p. 299
color constancy, p. 313
cone, p. 293
cortical column, p. 299
dorsal stream, p. 297
extrastriate (secondary visual) cortex (V2–V5), p. 299
facial agnosia, p. 301
fovea, p. 293
geniculostriate system, p. 297
homonymous hemianopia, p. 315
luminance contrast, p. 306
magnocellular (M) cell, p. 295
ocular dominance column, p. 309
opponent process, p. 313
optic ataxia, p. 316
optic chiasm, p. 295
optic flow, p. 286
parvocellular (P) cell, p. 295
perception, p. 289
photoreceptor, p. 289
primary visual cortex (V1), p. 299
quadrantanopia, p. 315
receptive field, p. 286
retina, p. 289
retinal ganglion cell (RGC), p. 295
retinohypothalamic tract, p. 297
rod, p. 293
scotoma, p. 315
sensation, p. 289
striate cortex, p. 297
tectopulvinar system, p. 297
topographic map, p. 289
trichromatic theory, p. 311
ventral stream, p. 297
visual field, p. 301
visual-form agnosia, p. 315
10
How Do We Hear, Speak, and Make Music?
Katherine Streeter
Language and music are universal among humans. The oral language of
every known culture follows similar basic structural rules, and people in
all cultures make and enjoy music. Music and language allow us both to
organize and to interact socially. Like music, language probably
improves parenting. People who can communicate their intentions to one
another and to their children presumably are better parents.
Language is independent of making or perceiving sounds, as sign
language demonstrates. In this chapter, however, language refers to
speech.
Humans’ capacities for language and music are linked conceptually
because both are based on sound. Understanding how and why we
engage in speech and music is this chapter’s goal. We first examine the
physical energy that we perceive as sound, then how the human ear and
nervous system detect and interpret sound. We next examine the
complementary neuroanatomy of human language and music processing.
Finally, we investigate how two other species, birds and bats, interpret
and utilize auditory stimuli.
10-1 Sound Waves: Stimulus for Audition
What we experience as sound is the brain’s construct, as is what we see.
Without a brain, sound and sight do not exist. When you strike a tuning
fork, the energy of its vibrating prongs displaces adjacent air molecules.
Figure 10-1 shows how, as one prong moves to the left, air molecules to
the left compress (grow more dense) and air molecules to the right
become more rarefied (grow less dense). The opposite happens when the
prong moves to the right. The undulating energy generated by this
displacement of molecules causes compression waves of changing air
pressure to emanate from the fork. These sound waves move through
compressible media—air, water, ground—but not through the vacuum of
outer space.
sound wave Mechanical displacement of molecules caused by
changing pressure that possesses the physical properties of
frequency, amplitude, and complexity. Also compression wave.
The top graph in Figure 10-2 represents waves of changing air
pressure emanating from a tuning fork by plotting air molecule density
against time at a single point. The bottom graph shows how the energy
from the right-hand prong of the tuning fork moves to make the air
pressure changes associated with a single cycle. A cycle is one complete
peak and valley on the graph—the change from one maximum or
minimum air pressure level of the sound wave to the next maximum or
minimum level, respectively.
The nervous system produces movement within a perceptual world
the brain constructs.
Section 9-4 explains how we see shapes and colors.
If you hit it harder, the frequency remains 264 hertz, but you also transfer
more energy into the vibrating prong, increasing its amplitude.
The fork now moves farther left and right but at the same frequency.
Increased air molecule compression intensifies the energy in a sound
wave, which amps the sound—makes it louder. Differences in amplitude
are graphed by increasing the height of a sound wave, as shown in the
middle panel of Figure 10-3 .
Sound wave amplitude is usually measured in decibels (dB), the
strength of a sound relative to the threshold of human hearing as a
standard, pegged at 0 decibels ( Figure 10-5 ). Typical speech sounds, for
example, measure about 40 decibels. Sounds that register more than about
70 dB we perceive as loud; those of less than about 20 dB we perceive as
soft, or quiet.
decibel (dB) Measure of the relative physical intensity of sounds.
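The decibel scale is logarithmic, which the prose above implies but does not spell out. As a hedged sketch using the standard sound-pressure-level formula, 20 × log10(p/p0) with a reference pressure p0 of 20 micropascals (the approximate threshold of hearing), the Python lines below show what the numbers in this section mean in pressure terms:

import math

P_REF = 20e-6  # reference pressure in pascals; roughly the threshold of human hearing (0 dB)

def spl_db(pressure_pa):
    # Sound pressure level, in decibels, relative to the hearing threshold.
    return 20 * math.log10(pressure_pa / P_REF)

def pressure_ratio(db):
    # How many times the threshold pressure a given decibel level represents.
    return 10 ** (db / 20)

print(round(spl_db(P_REF)))        # 0 dB: the threshold itself
print(round(pressure_ratio(40)))   # 100: the text's 40-dB speech example is ~100x the threshold pressure
print(round(spl_db(40 * P_REF)))   # 32: a 40-fold pressure increase is only about 32 dB

The last line is worth keeping in mind when reading the Drake-Lee finding below: a 40-fold rise in the pressure needed to reach a musician’s threshold corresponds to a temporary threshold shift of roughly 32 decibels.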
The human nervous system evolved to be sensitive to soft sounds and
so is actually blown away by extremely loud ones. People regularly
damage their hearing through exposure to very loud sounds (such as rifle
fire at close range) or even by prolonged exposure to sounds that are only
relatively loud (such as at a live concert). Prolonged exposure to sounds
louder than 100 decibels is likely to damage our hearing.
Rock bands, among others, routinely play music that registers higher
than 120 decibels and sometimes as high as 135 decibels. Drake-Lee
(1992) found that rock musicians had a significant loss of sensitivity to
sound waves, especially at about 6000 hertz. After a typical 90-minute
concert, this loss was temporarily far worse—as much as a 40-fold
increase in sound pressure was needed to reach a musician’s hearing
threshold. But rock concerts are not the only music venue that can
damage hearing. Teie (1998) reports that symphony orchestras also
produce dangerously high sound levels and that hearing loss is common
among symphony musicians. Similarly, prolonged listening through
headphones or earbuds to music played loudly on personal music players
is responsible for significant hearing loss in many young people (Daniel,
2007).
Perception of Sound
Visualize what happens when you toss a pebble into a pond. Waves of
water emanate from the point where the pebble enters the water. These
waves produce no audible sound. But if your skin were able to convert the
water wave energy (sensation) into neural activity that stimulated your
auditory system, you would hear the waves when you placed your hand in
the rippling water (perception). When you removed your hand, the sound
would stop.
The pebble hitting the water is much like a tree falling to the ground,
and the waves that emanate from the pebble’s entry point are like the air
pressure waves that emanate from the place where the tree strikes the
ground. The frequency of the waves determines the pitch of the sound
heard by the brain, whereas the height (amplitude) of the waves
determines the sound’s loudness.
Our sensitivity to sound waves is extraordinary. At the threshold of
human hearing, we can detect the displacement of air molecules of about
10 picometers. We are rarely in an environment where we can detect such
a small air pressure change: there is usually too much background noise.
A quiet, rural setting is probably as close as we ever get to an
environment suitable for testing the acuteness of our hearing. The next
time you visit the countryside, take note of the sounds you can hear. If
there is no sound competition, you can often hear a single car engine
miles away.
In addition to detecting minute changes in air pressure, the auditory
system is also adept at simultaneously perceiving different sounds. As
you read this chapter, you can differentiate all sorts of sounds around you
—traffic on the street, people talking next door, your air conditioner
humming, footsteps in the hall. As you listen to music, you detect the
sounds of different instruments and voices.
You can perceive more than one sound simultaneously because each
frequency of change in air pressure (each different sound wave)
stimulates different neurons in your auditory system. Sound perception is
only the beginning of your auditory experience. Your brain interprets
sounds to obtain information about events in your environment, and it
analyzes a sound’s meaning. Your use of sound to communicate with other people through both language and music clearly illustrates these processes.
1 picometer = one-trillionth of a meter
Properties of Language
Experience listening to a particular language helps the brain to analyze
rapid speech, which is one reason people who are speaking languages
unfamiliar to you often seem to be talking incredibly fast. Your brain does
not know where the foreign words end and begin, so they seem to run
together in a rapid-fire stream.
A unique characteristic of our perception of speech sounds is our
tendency to hear variations of a sound as if they were identical, even
though the sound varies considerably from one context to another. For
instance, the English letter d is pronounced differently in the words deep,
deck, and duke, yet a listener perceives the pronunciations to be the same
d sound.
The auditory system must therefore have a mechanism for categorizing
sounds as being the same despite small differences in pronunciation.
Experience must affect this mechanism, because different languages
categorize speech sounds differently. A major obstacle to mastering a
foreign language after age 10 is the difficulty of learning which sound
categories are treated as equivalent.
Auditory constancy is reminiscent of the visual system’s capacity for
object constancy; see Section 9-4 .
Properties of Music
As with other sounds, the subjective properties that people perceive in
musical sounds differ from one another. One subjective property is
loudness, the magnitude of the sound as judged by a person. Loudness is
related to the amplitude of a sound wave measured in decibels, but
loudness is also subjective. What is very loud music for one person may
be only moderately loud for another, whereas music that seems soft to one
listener may not seem at all soft to someone else. Your perception of
loudness also changes with context. After you’ve slowed down from
driving fast on a highway, for example, your car’s music system seems
louder. The reduction in road noise alters your perception of the music’s
loudness.
Another subjective property of musical sounds is pitch, the position of
each tone on a musical scale as judged by the listener. Although pitch is
clearly related to sound wave frequency, there is more to it than that.
Consider the note middle C as played on a piano. This note can be
described as a pattern of sound frequencies, as is the clarinet note in
Figure 10-6 .
Like the note played on the piano, any musical note is defined by its
fundamental frequency—the lowest frequency of the sound wave pattern,
or the rate at which the overall pattern repeats. For middle C, the
fundamental frequency is 264 hertz, and the sound waves for notes C, E,
and G, as measured by a spectrograph, are shown in Figure 10-7 . Notice
that by convention sound wave spectrographs are measured in kilohertz
(kHz), or thousands of hertz. Thus, if we look at the fundamental
frequency for middle C, it is the first large wave on the left, at 0.264
kilohertz. The fundamental frequencies for E and G are 0.330 and 0.392
kilohertz, respectively.
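To see what “the rate at which the overall pattern repeats” means computationally, here is a hedged Python sketch (the harmonic amplitudes, the sample rate, and the autocorrelation method are illustrative choices, not anything specified in the text). It builds a middle-C-like tone from a 264-Hz fundamental plus two harmonics and then recovers the repetition rate by finding the time shift at which the waveform best matches itself:

import math

SAMPLE_RATE = 48000
F0 = 264.0   # middle C, as given in the text

# A middle-C-like tone: fundamental plus second and third harmonics.
n = 2048
tone = [math.sin(2 * math.pi * F0 * t / SAMPLE_RATE)
        + 0.5 * math.sin(2 * math.pi * 2 * F0 * t / SAMPLE_RATE)
        + 0.25 * math.sin(2 * math.pi * 3 * F0 * t / SAMPLE_RATE)
        for t in range(n)]

def estimate_fundamental(signal, rate, fmin=100.0, fmax=1000.0):
    # Autocorrelation: the lag (in samples) at which the signal best matches a
    # shifted copy of itself is one full period of the overall pattern.
    best_lag, best_score = None, float("-inf")
    for lag in range(int(rate / fmax), int(rate / fmin) + 1):
        score = sum(signal[i] * signal[i + lag] for i in range(len(signal) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return rate / best_lag

print(round(estimate_fundamental(tone, SAMPLE_RATE), 1))   # close to 264 Hz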
An important feature of the human brain’s analysis of music is that
middle C is perceived as being the same note whether it is played on a
piano or on a guitar, even though the sounds of these instruments differ
widely. The right temporal lobe extracts pitch from sound, whether the
sound is speech or music. In speech, pitch contributes to the perceived
melodic tone of a voice, or prosody.
prosody Melodic tone of the speaking voice.
A final property of musical sound is quality, or timbre, the perceived
characteristics that distinguish a particular sound from all others of
similar pitch and loudness. We can easily distinguish the timbre of a
violin from that of a trombone, even if both instruments are playing the
same note at the same loudness. The quality of their sounds differs.
10-2 Functional Anatomy of the Auditory System
To understand how the nervous system analyzes sound waves, we begin
by tracing the pathway sound energy takes to and through the brain. The
ear collects sound waves from the surrounding air and converts their
mechanical energy to electrochemical neural energy, which begins a long
route through the brainstem to the auditory cortex.
Before we can trace the journey from ear to cortex, we must ask what
the auditory system is designed to do. Because sound waves have the
properties frequency, amplitude, and complexity, we can predict that the
auditory system is structured to decode these properties. Most animals
can tell where a sound comes from, so some mechanism must locate
sound waves in space. Finally, many animals, including humans, not
only analyze sounds for meaning but also make sounds. Because the
sounds they produce are often the same as the ones they hear, we can
infer that the neural systems for sound production and analysis must be
closely related.
In humans, the evolution of sound-processing systems for both
language and music was accompanied by enhancement of specialized
cortical regions, especially in the temporal lobes. In fact, a major
difference between the human and the monkey cortex is a marked
expansion of auditory areas in humans.
The cochlea’s receptor cells, the hair cells, and the cells that support them are collectively called the organ of Corti, shown in detail in Figure 10-8.
When sound waves vibrate the eardrum, the vibrations are transmitted
to the ossicles. The leverlike action of the ossicles amplifies the
vibrations and conveys them to the membrane that covers the cochlea’s
oval window. As Figure 10-8 shows, the cochlea coils around itself and
looks a bit like a snail shell. Inside its bony exterior, the cochlea is
hollow, as the cross-sectional drawing reveals.
The hollow cochlear compartments are filled with lymphatic fluid,
and floating in its midst is the thin basilar membrane. Embedded in a
part of the basilar membrane are outer and inner hair cells. At the tip of
each hair cell are several filaments called cilia, and the cilia of the outer
hair cells are embedded in the overlying tectorial membrane. The inner
hair cells loosely contact this tectorial membrane.
basilar membrane Receptor surface in the cochlea that transduces
sound waves into neural activity.
hair cell Specialized neurons in the cochlea tipped by cilia; when
stimulated by waves in the cochlear fluid, the cilia bend and
generate graded potentials in inner hair cells, the auditory receptor
cells.
Pressure from the stirrup on the oval window makes the cochlear fluid
move because a second membranous window in the cochlea (the round
window ) bulges outward as the stirrup presses inward on the oval
window. In a chain reaction, the waves traveling through the cochlear
fluid bend the basilar and tectorial membranes, and the bending
membranes stimulate the cilia at the tips of the outer and inner hair cells.
Cochlea actually means snail shell in Latin.
Transducing Sound Waves into Neural Impulses
How does the conversion of sound waves into neural activity code the
various properties of sound that we perceive? In the late 1800s, the
German physiologist Hermann von Helmholtz proposed that sound
waves of different frequencies cause different parts of the basilar
membrane to resonate. Von Helmholtz was partly correct. Actually, all
parts of the basilar membrane bend in response to incoming waves of
any frequency. The key is where the peak displacement takes place
(Figure 10-9 ).
Auditory Receptors
Two kinds of hair cells transform sound waves into neural activity.
Figure 10-8 (bottom left) shows the anatomy of the inner hair cells;
Figure 10-10 illustrates how sound waves stimulate them. A young
person’s cochlea has about 12,000 outer and 3500 inner hair cells. The
numbers fall off with age. Only the inner hair cells act as auditory
receptors, and their numbers are small considering how many different
sounds we can hear. As diagrammed in Figure 10-10 , both outer and
inner hair cells are anchored in the basilar membrane. The tips of the
cilia of outer hair cells are attached to the overlying tectorial membrane,
but the cilia of the inner hair cells only loosely touch that membrane.
Nevertheless, the movement of the basilar and tectorial membranes
causes the cochlear fluid to flow past the cilia of the inner hair cells,
bending them back and forth.
FIGURE 10-10 Transducing Waves into Neural Activity
Movement of the basilar membrane produces a shearing force in the
cochlear fluid that bends the cilia, leading to the opening or closing of
calcium channels in the outer hair cells. An influx of calcium ions leads the
inner hair cells to release neurotransmitter, stimulating increased action
potentials in auditory neurons.
Animals with intact outer hair cells but no inner hair cells are
effectively deaf. That is, they can perceive only very loud, low-frequency
sounds via the somatosensory system. You may have experienced this feeling when a subwoofer or a passing truck caused vibrations in your chest. Inner hair cells can be destroyed by prolonged exposure to intense sound pressure waves, infections, diseases, or certain chemicals and drugs. Inner hair cells do not regenerate; thus, once the inner hair cells have died, hearing loss is permanent.
Outer hair cells function by sharpening the cochlea’s resolving power,
contracting or relaxing and thereby changing tectorial membrane
stiffness. That’s right: the outer hair cells have a motor function. While
we typically think of sensory input preceding motor output, in fact,
motor systems can influence sensory input. The pupil contracts or dilates
to change the amount of light that falls on the retina, and the outer hair
cells contract or relax to alter the physical stimulus detected by the inner
hair cells.
How this outer hair cell function is controlled is puzzling. What
stimulates these cells to contract or relax? The answer seems to be that
through connections with axons in the auditory nerve, the outer hair cells
send a message to the brainstem auditory areas and receive a reply that
causes the cells to alter tension on the tectorial membrane. In this way,
the brain helps the hair cells to construct an auditory world. The outer
cells are also part of a mechanism that modulates auditory nerve firing,
especially in response to intense sound pressure waves, and thus offers
some protection against their damaging effects.
A final question remains: How does movement of the inner hair cell
cilia alter neural activity? The neurons of the auditory nerve have a
spontaneous baseline rate of firing action potentials, and this rate is
changed by the amount of neurotransmitter the hair cells release. It turns
out that movement of the cilia changes the inner hair cell’s polarization
and its rate of neurotransmitter release. Inner hair cells continuously leak
calcium, and this leakage causes a small but steady amount of
neurotransmitter release into the synapse. Movement of the cilia in one
direction results in depolarization: calcium channels open and release
more neurotransmitter onto the dendrites of the cells that form the
auditory nerve, generating more nerve impulses. Movement of the cilia
in the other direction hyperpolarizes the cell membrane, and transmitter
release decreases, thus decreasing activity in auditory neurons.
Inner hair cells are amazingly sensitive to the movement of their cilia.
A movement sufficient to allow sound wave detection is only about 0.3
nm, about the diameter of a large atom! Such sensitivity helps to explain
why our hearing is so incredibly sensitive. Clinical Focus 10-2 ,
Otoacoustic Emissions, describes a consequence of cochlear function.
Section 4-2 reviews phases of the action potential and its
propagation as a nerve impulse.
CLINICAL FOCUS 10-2
Otoacoustic Emissions
While the ear is exquisitely designed to amplify and convert sound
waves into action potentials, it is unique among the sensory organs.
The ear also produces the physical stimulus it is designed to detect!
A healthy cochlea produces sound waves called otoacoustic
emissions.
otoacoustic emissions Spontaneous or evoked sound waves produced within the ear by the cochlea that escape from the ear.
The cochlea acts as an amplifier. The outer hair cells amplify
sound waves, providing an energy source that enhances cochlear
sensitivity and frequency selectivity. Not all the energy the cochlea
generates is dissipated within it. Some escapes toward the middle
ear, which works efficiently in both directions, thus setting the
eardrum in motion. The eardrum then acts as a loudspeaker, radiating
sound waves—the otoacoustic emissions—out of the ear.
Sensitive microphones placed in the external ear canal can detect
both types of otoacoustic emissions, spontaneous and evoked. As the
name implies, spontaneous otoacoustic emissions occur without
external stimulation. Evoked otoacoustic emissions, generated in response to sound waves, are important because they are useful for assessing hearing impairments.
A simple, noninvasive test can detect and evaluate evoked
otoacoustic emissions in newborns and children who are too young
to take conventional hearing tests, as well as in people of any age. A
small speaker and microphone are inserted into the ear. The speaker
emits a click sound, and the microphone detects the resulting evoked
emission without damaging the delicate workings of the inner ear.
Missing or abnormal evoked emissions predict a hearing deficit.
Many wealthy countries now sponsor universal programs to test the
hearing of all newborn babies using otoacoustic emissions.
Otoacoustic emissions serve a useful purpose, but even so, they
play no direct role in hearing. They are considered an
epiphenomenon— a secondary phenomenon that occurs in parallel
with or above (epi ) a primary phenomenon.
Pathways to the Auditory Cortex
Inner hair cells in the organ of Corti synapse with neighboring bipolar cells, whose axons form the auditory (cochlear) nerve. The auditory nerve in turn forms part of the eighth cranial nerve, the auditory
vestibular nerve that governs hearing and balance. Whereas ganglion
cells in the eye receive inputs from many receptor cells, bipolar cells in
the ear receive input from but a single inner hair cell receptor.
Cochlear-nerve axons enter the brainstem at the level of the medulla
and synapse in the cochlear nucleus, which has ventral and dorsal
subdivisions. Two nearby structures in the hindbrain (brainstem), the
superior olive (a nucleus in the olivary complex) and the trapezoid body,
receive connections from the cochlear nucleus, as charted in Figure 10-11. Projections from the cochlear nucleus connect with cells on the same
side of the brain as well as with cells on the opposite side. This
arrangement mixes the inputs from the two ears to form a single sound
perception.
Both the cochlear nucleus and the superior olive send projections to
the inferior colliculus in the dorsal midbrain. Two distinct pathways
emerge from the inferior colliculus, coursing to the medial geniculate
nucleus in the thalamus. The ventral region of the medial geniculate
nucleus projects to the primary auditory cortex (area A1), whereas the
dorsal region projects to the auditory cortical regions adjacent to area
A1.
medial geniculate nucleus Major thalamic region concerned with
audition.
primary auditory cortex (area A1) Asymmetrical structures within
Heschl’s gyrus in the temporal lobes; receive input from the ventral
region of the medial geniculate nucleus.
Analogous to the two distinct visual pathways—the ventral stream for
object recognition and the dorsal stream for visual control of movement
—a similar distinction exists in the auditory cortex (Romanski et al.,
1999). Just as we can identify objects by their sound characteristics, we
can direct our movements by the sound we hear. The role of sound in
guiding movement is less familiar to sight-dominated people than it is to
the blind. Nevertheless, the ability exists in us all. Imagine waking up in
the dark and reaching to pick up a ringing telephone or to turn off an
alarm clock. Your hand automatically forms the appropriate shape in
response to just the sound you have heard. That sound is guiding your
movements much as a visual image guides them.
Relatively little is known about the what–how auditory pathways in
the cortex. One appears to continue through the temporal lobe, much like
the ventral visual pathway, and plays a role in identifying auditory
stimuli. A second auditory pathway apparently goes to the posterior
parietal region, where it forms a dorsal route for the auditory control of
movement. It appears as well that auditory information can gain access
to visual cortex, as illustrated in Research Focus 10-3 , Seeing with
Sound.
Figure 2-27 lists and locates the cranial nerves, and in its caption is a
mnemonic for remembering them in order.
Figure 9-13 maps the visual pathways through the cortex.
Auditory Cortex
In humans, the primary auditory cortex (A1) lies within Heschl’s gyrus,
surrounded by secondary cortical areas (A2), as shown in Figure 10-12
A . The secondary cortex lying behind Heschl’s gyrus is called the
planum temporale (Latin for temporal plane ).
In right-handed people, the planum temporale is larger on the left than
it is on the right side of the brain, whereas Heschl’s gyrus is larger on the
right side than on the left. The cortex of the left planum forms a speech
zone known as Wernicke’s area (the posterior speech zone), whereas
the cortex of the larger right-hemisphere Heschl’s gyrus has a special
role in analyzing music.
Wernicke’s area Secondary auditory cortex (planum temporale)
lying behind Heschl’s gyrus at the rear of the left temporal lobe;
regulates language comprehension. Also posterior speech zone.
These hemispheric differences mean that the auditory cortex is
anatomically and functionally asymmetrical. Although cerebral
asymmetry is not unique to the auditory system, it is most obvious here
because auditory analysis of speech takes place only in the left
hemisphere of right-handed people. About 70 percent of left-handed
people have the same anatomical asymmetries as right-handers, an
indication that speech organization is not strictly related to hand
preference. Language, including speech and other functions such as
reading and writing, also is asymmetrical, although the right hemisphere
also contributes to these broader functions.
The remaining 30 percent of left-handers fall into two distinct groups.
The organization in about half of these people is opposite that of right-
handers. The other half has some idiosyncratic bilateral speech
representation. That is, about 15 percent of all left-handed people have
some speech functions in one hemisphere and some in the other
hemisphere.
(A) Auditory cortex
(B) Insula
FIGURE 10-12 Human Auditory Cortex (A) The left hemisphere, showing the lateral fissure retracted to reveal the primary auditory cortex buried within Heschl’s gyrus and the adjacent secondary auditory regions. In
cross section, the posterior speech zone (Wernicke’s area) is larger on the
left, and Heschl’s gyrus is larger in the right hemisphere. (B) Frontal view
showing the extent of the multifunctional insular cortex buried in the lateral
fissure.
RESEARCH FOCUS 10-3
Seeing with Sound
As detailed in Section 10-5 , echolocation, the ability to use sound to
locate objects in space, has been extensively studied in species such
as bats and dolphins. But it was reported more than 50 years ago that
some blind people also echolocate.
More recently, anecdotal reports have surfaced of blind people
who navigate around the world using clicks made with their tongues
and mouths and then listening to the returning echoes. Videos, such as
the 45-minute documentary at https://www.youtube.com/watch?v=AiBeLoB6CKE, show congenitally blind people riding a bicycle
down a street with silent obstacles such as parked cars. But how do
they do this, and what part of the brain enables it?
Behavioral studies of blind people reveal that echolocators make
short, spectrally broad clicks by moving the tongue backward and
downward from the roof of the mouth directly behind the teeth.
Skilled echolocators can identify properties of objects that include
position, distance, size, shape, and texture (Teng & Whitney, 2011).
Thaler and colleagues (2011) investigated the neural basis of this
ability using fMRI. They studied two blind echolocation experts and
compared brain activity for sounds that contain both clicks and
returning echoes with brain activity for control sounds that did not
contain the echoes. The participants use echolocation to localize
objects in the environment, but more important, they also perceive
the object’s shape, motion—even its identity!
When the blind participants listened to recordings of their
echolocation clicks and echoes compared to silence, both the
auditory cortex and the primary visual cortex showed activity.
Sighted controls showed activation only in the auditory cortex.
Remarkably, when the investigators compared the controls’ brain
activity to recordings that contained echoes versus those that did not,
the auditory activity disappeared. By contrast, as illustrated in the
figure, the blind echolocators showed activity only in the visual
cortex when sounds with and without echoes were compared.
Sighted controls (findings not shown) showed no activity in either
the visual or auditory cortex in this comparison.
These results suggest that blind echolocation experts process
click–echo information using brain regions typically devoted to
vision. Thaler and his colleagues propose that the primary visual
cortex is performing a spatial computation using information from
the auditory cortex.
Future research may determine how this process works. More
immediately, the study suggests that echolocation could be taught to
blind and visually impaired people to provide them increased
independence in their daily life.
Seeing with Sound When cortical activation for sound with and
without echoes is imaged in a blind echolocator, only the visual cortex
shows activation (left) relative to the auditory cortex (right). Research
from Thaler, L., Arnott, S. R., & Goodale, M. A. (2011), Neural
correlates of natural human echolocation in early and late blind
echolocation experts. PLoS ONE, 6, (5)e20162.
doi:10.1371/journal.pone.0020162
Hearing Pitch
Recall that perception of pitch corresponds to the frequency (repetition
rate) of sound waves measured in hertz (cycles per second). Hair cells in
the cochlea code frequency as a function of their location on the basilar
membrane. In this tonotopic representation, hair cell cilia at the base of
the cochlea are maximally displaced by high-frequency waves, which we
hear as high-pitched sounds; those at the apex are displaced the most by
low-frequency waves, which we hear as low-pitched sounds. Because
each bipolar-cell axon that forms the cochlear nerve is connected to only
one inner hair cell, the bipolar cells convey information about the spot on
the basilar membrane, from apex to base, that is being stimulated.
tonotopic representation In audition, structural organization for
processing of sound waves from lower to higher frequencies.
Recordings from single fibers in the cochlear nerve reveal that
although each axon transmits information about only a small part of the
auditory spectrum, each cell does respond to a range of sound wave
frequencies—if the wave is sufficiently loud. That is, each hair cell is
maximally responsive to a particular frequency and also responds to
nearby frequencies, but the sound wave’s amplitude must be greater
(louder) for those nearby frequencies to excite the receptor’s membrane
potential.
We can plot this range of hair cell responses to different frequencies at
different amplitudes as a tuning curve. As graphed in Figure 10-13 ,
each hair cell receptor is maximally sensitive to a particular wavelength
but still responds somewhat to nearby wavelengths.
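A toy version of such a tuning curve can be written down directly (a sketch with invented numbers; real tuning curves are asymmetric and far more finely measured): give each fiber a characteristic frequency (CF) at which its threshold is lowest, and let the sound level needed to drive it rise as the test frequency moves away from the CF.

import math

def threshold_db(test_hz, cf_hz, floor_db=10, slope_db_per_octave=40):
    # Lowest threshold at the characteristic frequency; rises with distance in octaves.
    octaves_away = abs(math.log2(test_hz / cf_hz))
    return round(floor_db + slope_db_per_octave * octaves_away)

for cf in (1000, 10000):   # roughly the two curves plotted in Figure 10-13
    samples = {int(f): threshold_db(f, cf) for f in (cf / 2, cf * 0.9, cf, cf * 1.1, cf * 2)}
    print(f"CF {cf} Hz:", samples)

Stepping along the basilar membrane from apex to base corresponds to stepping the CF from low to high, which is the tonotopic layout described above.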
Tonotopic literally means of a tone place.
A hair cell’s frequency range parallels a photoreceptor’s response to
light wavelengths. See Figure 9-6 .
FIGURE 10-13 Tuning Curves Graphs plotting the sound wave frequency and amplitude (energy) required to increase the firing rate of two axons in the cochlear nerve. The lowest point on each tuning curve is the
frequency to which that hair cell is most sensitive. The curve at left is
centered on a frequency of 1000 Hz, the midrange of human hearing; the
curve at right is centered on a frequency of 10,000 Hz, in the high range.
Detecting Loudness
The simplest way for cochlear (bipolar) cells to indicate sound wave
intensity is to fire at a higher rate when amplitude is greater, which is
exactly what happens. More intense air pressure changes produce more
intense basilar membrane vibrations and therefore greater shearing of the
cilia. Increased shearing leads to more neurotransmitter released onto
bipolar cells. As a result, the bipolar axons fire more frequently, telling
the auditory system that the sound is getting louder.
Detecting Location
Psychologist Albert Bregman devised a visual analogy to describe what
the auditory system is doing when it detects sound location:
Imagine a game played at the side of a lake. Two small channels are
dug, side by side, leading away from the lake, and the lake water is
allowed to fill them up. Partway up each channel, a cork floats,
moving up and down with the waves. You stand with your back to the
lake and are allowed to look only at the two floating corks. Then you
are asked questions about what is happening on the lake. Are there
two motorboats on the lake or only one? Is the nearer one going from
left to right or right to left? Is the wind blowing? Did something heavy
fall into the water? You must answer these questions just by looking at
the two corks. This would seem to be an impossible task. Yet consider
an exactly analogous problem. As you sit in a room, a lake of air
surrounds you. Running off this lake, into your head, are two small
channels – your ear canals. At the end of each is a membrane (the ear
drum) that acts like the floating corks in the channels running off the
lake, moving in and out with the sound waves that hit it. Just as the
game at the lakeside offered no information about the happenings on
the lake except for the movements of the corks, the sound-producing
events in the room can be known by your brain only through the
vibrations of your two eardrums. (Bregman, 2005, p. 35)
We estimate the location of a sound both by taking cues derived from
one ear and by comparing cues received at both ears. The fact that each
cochlear nerve synapses on both sides of the brain provides mechanisms
for locating a sound source. In one mechanism, neurons in the brainstem
compute the difference in a sound wave’s arrival time at each ear—the
interaural time difference (ITD). Differences in arrival time need not be
large to be detected. If two sounds presented through earphones are
separated in time by as little as 10 microseconds, the listener will
perceive that a single sound came from the leading ear.
This computation of left-ear–right-ear arrival times is carried out in
the medial part of the superior olivary complex (see Figure 10-11 ).
Because these hindbrain cells receive inputs from each ear, they can
compare exactly when the signal from each ear reaches them.
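For a feel for the sizes involved, here is a hedged sketch of the ITD computation (the 20-centimeter ear separation, the 343 m/s speed of sound, and the simple extra-path-length formula are illustrative assumptions, not values from the text):

import math

SPEED_OF_SOUND = 343.0   # meters per second in air, approximately
EAR_SEPARATION = 0.20    # meters between the two ears, an assumed round number

def itd_microseconds(azimuth_degrees):
    # 0 degrees = straight ahead, 90 degrees = directly to one side.
    # Approximate the ITD as the extra path length to the far ear divided
    # by the speed of sound: d * sin(angle) / c.
    extra_path_m = EAR_SEPARATION * math.sin(math.radians(azimuth_degrees))
    return 1e6 * extra_path_m / SPEED_OF_SOUND

for angle in (0, 1, 10, 45, 90):
    print(f"{angle:3d} degrees off the midline -> ITD of about {itd_microseconds(angle):4.0f} microseconds")

With these illustrative numbers, the 10-microsecond difference mentioned above corresponds to a source only about 1 degree off the midline, and even a source directly to one side produces an ITD well under a millisecond, which gives some sense of the temporal precision the superior olive must achieve.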
Figure 10-16 shows how sound waves originating on the left reach
the left ear slightly before they reach the right ear. As the sound source
moves from the side of the head toward the middle, a person has greater
and greater difficulty locating it: the ITD becomes smaller and smaller
until there is no difference at all. When we detect no difference, we infer
that the sound is either directly in front of us or directly behind us. To
locate it, we turn our head, making the sound waves strike one ear
sooner. We have a similar problem distinguishing between sounds
directly above and below us. Again, we solve the problem by tilting our
head, thus causing the sound waves to strike one ear before the other.
Another mechanism used by the auditory system to detect the source
of a sound is the sound’s relative loudness on the left and the right—the
interaural intensity difference (IID). The head acts as an obstacle to
higher-frequency sound waves, which do not easily bend around the
head. As a result, higher-frequency waves on one side of the head are
louder than on the other. The lateral part of the superior olive and the
trapezoid body detect this difference. Again, sound waves coming from
directly in front or behind or from directly above or below require the
same solution: tilting or turning the head.
Head tilting and turning take time, which is important for animals,
such as owls, that hunt using sound. Owls need to know the location of a
sound simultaneously in at least two directions—right or left and above
or below. Owls, like humans, can orient in the horizontal plane to sound
waves by using ITD. Additionally, the owl’s ears have evolved to detect
the relative loudness of sound waves in the vertical plane. As
diagrammed in Figure 10-17 , owls’ ears are slightly displaced
vertically. This solution allows owls to hunt entirely by sound in the
dark. Bad news for mice.
FIGURE 10-16 Locating a Sound Compression waves originating on
the left side of the body reach the left ear slightly before the right. The ITD
is small, but the auditory system can discriminate it and fuse the dual
stimuli so that we perceive a single, clear sound coming from the left.
Horizontal orienting is azimuth detection; vertical orienting is elevation
detection.
Processing Language
An estimated 5000 to 7000 human languages are spoken in the world
today, and probably many more have gone extinct in past millennia.
Researchers have wondered whether the brain has a single system for
understanding and producing any language, regardless of its structure, or
whether disparate languages, such as English and Japanese, are
processed differently. To answer this question, it helps to analyze
languages to determine just how fundamentally similar they are, despite
their obvious differences.
Uniformity of Language Structure
Foreign languages often seem impossibly complex to those who do not
speak them. Their sounds alone may seem odd and difficult to make. If
you are a native English speaker, for instance, Asian languages, such as
Japanese, probably sound especially melodic and almost without obvious
consonants to you, whereas European languages, such as German or
Dutch, may sound heavily guttural.
Even within such related languages as Spanish, Italian, and French,
marked differences can make learning one of them challenging, even if
the student already knows another. Yet as real as all these linguistic
differences may be, they are superficial. The similarities among human
languages, although not immediately apparent, are actually far more
fundamental than their differences.
Noam Chomsky (1965) is usually credited as the first linguist to stress
similarities over differences in human language structure. In a series of
books and papers written over the past half-century, Chomsky has made
a sweeping claim, as have researchers such as Steven Pinker (1997) more
recently. They argue that all languages have common structural
characteristics stemming from a genetically determined constraint, and
these common characteristics form the basis of universal grammar
theory. Humans, apparently, have a built-in capacity for learning and
using language, just as we have for walking upright.
Chomsky was greeted with deep skepticism when he first proposed
this idea in the 1960s, but it has since become clear that the capacity for
human language is indeed genetic. An obvious piece of evidence:
language is universal in human populations. All people everywhere use
language.
A language’s complexity is unrelated to its culture’s technological
complexity. The languages of technologically unsophisticated peoples
are every bit as complex and elegant as the languages of postindustrial
cultures. Nor is the English of Shakespeare’s time inferior or superior to
today’s English; it is just different.
Another piece of evidence that Chomsky adherents cite for the genetic
basis of human language is that humans learn language early in life and
seemingly without effort. By about 12 months of age, children
everywhere have started to speak words. By 18 months, they are
combining words, and by age 3 years, they have a rich language
capability.
Perhaps the most amazing thing about language development is that
children are not formally taught the structure of their language, just as
they are not taught to crawl or walk. They just do it. As toddlers, they are
not painstakingly instructed in the rules of grammar. In fact, their early
errors—sentences such as “I goed to the zoo”—are seldom even
corrected by adults. Yet children master language rapidly. They also
acquire language through a series of stages that are remarkably similar
across cultures. Indeed, the process of language acquisition plays an
important role in Chomsky’s theory of its innateness—which is not to
say that language development is not influenced by experience.
At the most basic level, children learn the language or languages that
they hear spoken. In an English household, they learn English; in a
Japanese home, Japanese. They also pick up the language structure—the
vocabulary and grammar—of the people around them, even though that
structure can vary from one speaker to another. Children go through a sensitive period for language acquisition, probably from about 1 to 6 years of age. If they are not exposed to language during this period, their language skills are severely compromised. If children learn
two languages simultaneously, the two share the same part of Broca’s
area. In fact, their neural representations overlap (Kim et al., 1997).
Both its universality and its natural acquisition favor the theory of a genetic basis for human language. A third piece of evidence is the many
basic structural elements common to all languages. Granted, every
language has its own particular grammatical rules specifying exactly
how various parts of speech are positioned in a sentence (syntax), how
words are inflected to convey different meanings, and so forth. But an
overarching set of rules also applies to all human languages, and the first
rule is that there are rules.
For instance, all languages employ parts of speech that we call
subjects, verbs, and direct objects. Consider the sentence Jane ate the
apple. Jane is the subject, ate is the verb, and apple is the direct object.
Syntax is not specified by any universal rule but rather is a characteristic
of the particular language. In English, syntactical order (usually) is
subject, verb, object; in Japanese, the order is subject, object, verb; in
Gaelic, the order is verb, subject, object. Nonetheless, all have both
syntax and grammar.
The existence of these two structural pillars in all human languages is
seen in the phenomenon of creolization —the development of a new
language from what was formerly a rudimentary language, or pidgin.
Creolization took place in the seventeenth century in the Americas when
slave traders and colonial plantation owners brought together, from
various parts of West Africa, people who lacked a common language.
The newly enslaved needed to communicate, and they quickly created a
pidgin based on whatever language the plantation owners spoke—
English, French, Spanish, or Portuguese.
A 1-year-old’s 5- to 10-word vocabulary doubles in the next 6 months and by 36 months mushrooms to 1000 words; see Section 8-3.
Focus 8-3 describes how cortical activation differs for second languages learned later in life, and Section 15-6 reviews research on bilingualism and intelligence.
The pidgin had a crude syntax (word order) but lacked a real
grammatical structure. The children of the slaves who invented this
pidgin grew up with caretakers who spoke only pidgin to them. Yet
within a generation, these children had developed their own creole, a
language complete with a genuine syntax and grammar.
Clearly, the pidgin invented of necessity by adults was not a learnable
language for children. Their innate biology shaped a new language
similar in basic structure to all other human languages. All creolized
languages seem to evolve in a similar way, even though the base
languages are unrelated. This phenomenon can happen only because
there is an innate biological component to language development.
Localizing Language in the Brain
Finding a universal basic language structure set researchers on the search
for an innate brain system that underlies language use. By the late 1800s,
it had become clear that language functions were at least partly localized
—not just within the left hemisphere but to specific areas there. Clues
that led to this conclusion began to emerge early in the nineteenth
century, when neurologists observed patients with frontal lobe injuries
who had language difficulties.
Then, in 1861, the French physician Paul Broca confirmed that certain
language functions are localized in the left hemisphere. Broca concluded,
on the basis of several postmortem examinations, that language is
localized in the left frontal lobe, in a region just anterior to the central
fissure. A person with damage in this area is unable to speak despite both
an intact vocal apparatus and normal language comprehension. The
confirmation of Broca’s area was significant because it triggered the
idea that the left and right hemispheres might have different functions.
Broca’s area Anterior left hemisphere speech area that functions
with the motor cortex to produce movements needed for speaking.
Other neurologists of the time believed that Broca’s area might be
only one of several left-hemisphere regions that control language. In
particular, they suspected a relation between hearing and speech. Proving
this suspicion correct, Karl Wernicke later described patients who had
difficulty comprehending language after injury to the posterior region of
the left temporal lobe, identified as Wernicke’s area in Figure 10-18 .
Section 7-1 links Broca’s observations to his contributions to
neuropsychology.
FIGURE 10-18 Neurology of Language (A) In Wernicke’s model of
speech recognition, stored sound images are matched to spoken words in
the left posterior temporal cortex, shown in yellow. (B) Speech is produced
through the connection that the arcuate fasciculus makes between
Wernicke’s area and Broca’s area.
FIGURE 10-19 Mapping Cortical Functions (A) Neurosurgery for
eligible epilepsy patients who failed to respond to antiseizure medications.
The patient is fully conscious, lying on his right side, and kept comfortable
with local anesthesia. Wilder Penfield stimulates discrete cortical areas in
the patient’s exposed left hemisphere. In the background, a neurologist
monitors an EEG recorded from each stimulated area to help identify the
epileptogenic focus. The anesthetist (seated) observes the patient’s
responses to the cortical stimulation. (B) A drawing overlies a photograph
of the patient’s exposed brain. The numbered tickets identify points
Penfield stimulated to map the cortex in this patient’s brain. At points 26,
27, and 28, a stimulating electrode disrupted speech. Point 26 presumably
is in Broca’s area, 27 is the motor cortex facial control area, and 28 is in
Wernicke’s area.
Processing Music
Although Penfield did not study the effect of brain stimulation on musical analysis, many researchers have studied musical processing in brain-damaged patients. Clinical Focus 10-5, Cerebral Aneurysms, describes
one such case. Collectively, the results of these studies confirm that
musical processing is in fact largely a right-hemisphere specialization,
just as language processing is largely a left-hemisphere one.
Localizing Music in the Brain
A famous patient, the French composer Maurice Ravel (1875–1937),
provides an excellent example of right-hemisphere predominance for
music processing. Boléro is perhaps Ravel’s best-known work. At the
peak of his career, Ravel had a left-hemisphere stroke and developed
aphasia. Yet many of his musical skills remained intact post-stroke
because they were localized to the right hemisphere. He could still
recognize melodies, pick up tiny mistakes in music he heard, and even
judge the tuning of pianos. His music perception was largely intact.
Skills that had to do with producing music, however, were among
those destroyed. Ravel could no longer recognize written music, play the
piano, or compose. This dissociation of music perception and music
production may parallel the dissociation of speech comprehension and
speech production in language. Apparently, the left hemisphere plays at
least some role in certain aspects of music processing, especially those
that have to do with making music.
CLINICAL Focus 10-5
Cerebral Aneurysms
C. N. was a 35-year-old nurse described by Isabelle Peretz and her
colleagues (1994). In December 1986, C. N. suddenly developed
severe neck pain and headache. A neurological examination revealed
an aneurysm in the middle cerebral artery on the right side of her
brain.
An aneurysm is a bulge in a blood vessel wall caused by
weakening of the tissue, much like the bulge that appears in a bicycle
tire at a weakened spot. Aneurysms in a cerebral artery are
dangerous: if they burst, severe bleeding and consequent brain
damage result.
In February 1987, C. N.’s aneurysm was surgically repaired, and
she appeared to have few adverse effects. Postoperative brain
imaging revealed, however, that a new aneurysm had formed in the
same location but in the middle cerebral artery on the opposite side
of the brain. This second aneurysm was repaired 2 weeks later.
After her surgery, C. N. had temporary difficulty finding the right
word when she spoke, but more important, her perception of music
was deranged. She could no longer sing, nor could she recognize
familiar tunes. In fact, singers sounded to her as if they were talking
instead of singing. But C. N. could still dance to music.
A brain scan revealed damage along the lateral fissure in both
temporal lobes. The damage did not include the primary auditory
cortex, nor did it include any part of the posterior speech zone. For
these reasons, C. N. could still recognize nonmusical sound patterns
and showed no evidence of language disturbance. This finding
reinforces the hypothesis that nonmusical sounds and speech sounds
are analyzed in parts of the brain separate from those that process
music.
To find out more about how the brain carries out the perceptual side of music processing, Zatorre and his colleagues (1994) conducted PET studies. When participants listened simply to bursts of noise, Heschl’s gyrus became activated (Figure 10-22A), but perception of melody triggered major activation in the right-hemisphere auditory cortex lying in front of Heschl’s gyrus (Figure 10-22B), as well as minor activation in the same left-hemisphere region (not shown).
In another test, participants listened to the same melodies. The
investigators asked them to indicate whether the pitch of the second note
was higher or lower than that of the first note. During this task, which
necessitates short-term memory of what was just heard, blood flow in the
right frontal lobe increased (Figure 10-22C ). As with language, then, the
frontal lobe plays a role in auditory analysis when short-term memory is
required. People with enhanced or impaired musical abilities show
differences in frontal lobe organization, as demonstrated in Research
Focus 10-6 , The Brain’s Music System.
As noted earlier, the capacity for language is innate. Sandra Trehub
and her colleagues (1999) showed that music may be innate as well, as
we hypothesized at the beginning of the chapter.
Trehub found that infants show learning preferences for musical scales
versus random notes. Like adults, children are sensitive to musical
errors, presumably because they are biased for perceiving regularity in
rhythms. Thus, it appears that the brain is prepared at birth for hearing
both music and language, and presumably it selectively attends to these
auditory signals.
The brain may be tuned prenatally to the language it will hear at
birth; see Focus 7-1.
At: https://www.youtube.com/watch?v=eNpoVeLfMKg , watch as
Parkinson patients step to the beat of music to improve their gait
length and walking speed.
More on music as therapy in Focus 5-2 and the dance class for
Parkinson patients pictured on page 160 . Sections 16-2 and 16-3
revisit music therapy.
Music as Therapy
The power of music to engage the brain has led to its use as a therapeutic
tool for brain dysfunctions. The best evidence of its effectiveness lies in
studies of motor disorders such as stroke and Parkinson disease
(Johansson, 2012). Listening to rhythm activates the motor and premotor
cortex and can improve gait and arm training after stroke. Musical
experience reportedly also enhances the ability to discriminate speech
sounds and to distinguish speech from background noise in patients with
aphasia.
Music therapy also appears to be a useful complement to more
traditional therapies, especially when there are problems with mood,
such as in depression or brain injury. This may prove important in the
treatment of stroke and traumatic brain injury, with which depression is a
common complication in recovery. Music therapy also has positive
effects following major surgery in both adults and children, by reducing both pain perception and the amount of pain medication patients use
(Sunitha Suresh et al., 2015). With all these applications, perhaps
researchers will decide to use noninvasive imaging to determine which
brain areas music therapy recruits.
10-4 Auditory Communication in Nonhuman Species
Sound has survival value. You will appreciate this if you’ve ever
narrowly escaped becoming an accident statistic by crossing a busy
intersection on foot while listening to a music player or talking on a cell
phone. Audition is as important a sense to many animals as vision is to
humans. Many animals also communicate with other members of their
species by using sound, as humans do.
Here we consider just two types of auditory communication in
nonhumans: birdsong and echolocation. Each provides a model for
understanding different aspects of brain–behavior relations in which the
auditory system plays a role.
Birdsong
Of about 8500 living bird species, about half are considered songbirds.
Birdsong has many functions, including attracting mates (usually
employed by males), demarcating territories, and announcing location or
even just presence. Although all birds of the same species have a similar
song, the song’s details vary markedly from region to region, much as
dialects of the same human language vary.
Parallels Between Birdsong and Language
Figure 10-23 shows sound wave spectrograms for the songs of male
white-crowned sparrows that live in three localities near San Francisco.
These songs differ markedly from region to region. The differences stem
from the fact that song development in young birds is influenced not just
by genes but also by early experience and learning. Young birds that
have a good tutor can acquire more elaborate songs than can other
members of their species (Marler, 1991).
These gene–experience interactions are the result of epigenetic
mechanisms. For example, brain areas that control singing in adult song
sparrows show altered gene expression in spring as the breeding—and
singing—season begins (Thompson et al., 2012). Such studies have not
yet targeted young birds, but it is safe to predict that researchers will find
parallel changes.
Birdsong and human language have broad similarities beyond
regional variation. Both appear to be innate yet are sculpted by
experience. Both are diverse and can vary in complexity. Humans seem
to have a basic template for language that is programmed into the brain,
and experience adds a variety of specific structural forms to this
template.
If a young bird is not exposed to song until it is a juvenile and then
listens to recordings of birdsongs of various species, the young bird
shows a general preference for its own species’ song. This preference
must mean that each bird has a species-specific song template in the
brain. As for language, experience modifies the details of this birdsong
template.
Echolocation in Bats
After rodents, bats are the most numerous mammalian order. The two
general groups, or suborders, are the smaller bats (Microchiroptera) and
the larger fruit-eating and flower-visiting bats (Megachiroptera),
sometimes called flying foxes. Each uses a form of echolocation. Using
their wings to make clicking sounds, Megachiroptera can detect large
surfaces and orient in complete darkness. This rudimentary form of
echolocation may be the forerunner of the highly evolved throat
(laryngeal) system used in a sophisticated way by the Microchiroptera to
navigate, hunt, and communicate using sound waves (Boonman et al.,
2014).
Most of the 680 species of Microchiroptera feed on insects. Others
live on blood (vampire bats), and some catch frogs, lizards, fishes, birds,
and small mammals. These bats’ auditory system is highly specialized to
use echolocation not only to locate targets in the dark but also to analyze
the targets’ features as well as environmental features in general.
Through echolocation, these bats identify prey, navigate through the
leaves of trees, and locate suitable landing surfaces. Echolocation in the
Microchiroptera works rather like sonar. The bat larynx emits bursts of
sound waves at ultrasonic frequencies. The waves bounce off objects and
return to the bat’s ears, allowing the animal to identify what is in the
surrounding environment. The bat, in other words, navigates by the
echoes it hears, differentiating among the various characteristics of the
echoes.
echolocation Identifying and locating an object by bouncing sound
waves off it.
Moving objects (such as insects) give off a moving echo, smooth
objects a different echo from rough objects, and so on. A key component
of the bats’ echolocation system is analysis of differences in echo return
times. Close objects return echoes sooner than more distant objects do,
and the textures of various objects’ surfaces impose minute differences
in return times.
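The arithmetic behind echo ranging is simple. The following sketch is an illustration only—the speed of sound is the standard value for air, and the example delay is an assumed figure, not data from bat studies.

# Convert an echo's round-trip delay into an estimated target distance.
SPEED_OF_SOUND = 343.0  # meters per second in air

def target_distance_meters(echo_delay_seconds):
    """Sound travels out and back, so the one-way distance is half the round trip."""
    return SPEED_OF_SOUND * echo_delay_seconds / 2.0

print(round(target_distance_meters(0.006), 2))  # a 6 ms delay means a target about 1 m away

The tiny additional differences in return time imposed by a target's surface texture ride on top of this basic distance estimate.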
A bat’s cries are short (ranging from 0.3 to 200 milliseconds) and
high frequency (12,000 to 200,000 Hz, charted in Figure 10-4 ). Most of
this range lies at too high a frequency for the human ear to detect.
Different bat species produce sound waves of different frequencies that
depend on the animal’s ecology. Bats that catch prey in the open use
different frequencies from those used by bats that catch insects in foliage
and from those used by bats that hunt prey on the ground.
The echolocation abilities of bats are phenomenal, as shown in Figure
10-25 . Bats in the wild can be trained to catch small food particles
thrown up into the air in the dark. These echolocating skills make the bat
a most efficient hunter. The little brown bat, for instance, can capture
tiny flying insects, such as mosquitoes, at the remarkable rate of two per
second.
Researchers have considerable interest in the neural mechanisms of
bat echolocation. Each species emits sound waves in a relatively narrow
frequency range, and a bat’s auditory pathway has cells specifically
tuned to echoes in its species’ frequency range. For example, the
mustached bat sends out sound waves ranging from 60,000 to 62,000 Hz,
and its auditory system has a cochlear fovea (a maximally sensitive area
in the organ of Corti) that corresponds to that frequency range.
In this way, more neurons are dedicated to the frequency range used
for echolocation than to any other range of frequencies. Analogously, our
visual system dedicates more neurons to the retina’s fovea, the area
responsible for our most detailed vision. In the cortex of the bat’s brain,
several distinct areas process complex echoic inputs. One area computes
the distance of given targets from the animal, for instance, whereas
another area computes the velocity of a moving target. This neural
system makes the bat exquisitely adapted for nighttime navigation.
Dolphins use an auditory strategy similar to that of bats, but in water. Focus 10-2 profiles human echolocators.
10-4 REVIEW
Auditory Communication in Nonhuman Species
Before you continue, check your understanding.
1 . Song development in young birds is influenced both by genes and by
early experience and learning, interactions indicative of _________.
2 . In many bird species the control of song in the brain is lateralized to
the _________ hemisphere.
3 . Bats use _________ to locate prey in the dark. This system is much
like the _________ that ships use to locate underwater objects.
4 . What does the presence of dialects in birdsong in the same species
demonstrate?
Answers appear at the back of the book.
11
Neuroprosthetics
Most of us seamlessly control the approximately 650 muscles that
move our bodies. But if the motor neurons that control those muscles
no longer connect to them, as happens in amyotrophic lateral sclerosis
(ALS, or Lou Gehrig disease), movement, and eventually breathing,
become impossible.
This happened to Scott Mackler, a neuroscientist and marathon
runner, in his late 30s. Dependent on a respirator to breathe, he
developed locked-in syndrome: Mackler lost virtually all ability to
communicate.
ALS has no cure, and death often occurs within 5 years of diagnosis.
Yet Scott Mackler beat the odds: he survived for 17 years before he
died in 2013 at age 55. Mackler beat locked-in syndrome too, by
learning to translate his mental activity into movement. He returned to
work at the University of Pennsylvania, stayed in touch with family and
friends, and even gave an interview to CBS’s 60 Minutes in 2008.
Mackler was a pioneer in brain–computer interface (BCI)
technology. BCIs employ the brain’s electrical signals to direct
computer-controlled devices. BCIs are one area of neuroprosthetics, the development of computer-assisted devices to replace lost biological function.
neuroprosthetics Field that develops computer-assisted devices to
replace lost biological function.
A computer–brain interface (CBI) employs electrical signals from a
computer to instruct the brain. Cochlear implants that deliver sound-
related signals to the inner ear to allow hearing are CBIs. Brain–
computer–brain interfaces (BCBIs) combine the BCI and CBI
approaches. BCBIs enable the brain to command robotic devices that
provide it sensory feedback.
In 2008, Mackler’s BCI took up to 20 s to execute a single
command. Today’s devices enhance processing speed and increase
signal precision by using electrodes placed directly adjacent to brain
cells in arrays that interface with thousands of cells. Experimental
approaches use optogenetics, incorporating light-sensitive channels into
cortical motor and sensory neurons. Light signals are faster than
electrical signals and produce less tissue damage.
BCBIs command robotic hands to grasp objects while tactile
receptors on the robot are delivering touch and other sensory
information to the user. BCBIs in development also control exoskeletal
devices that reach and walk and return touch, body position, and
balance information to guide movement. In essence, BCBIs use
variations in CNS activity to generate signals. It is unlikely, however,
that in doing so they employ the signaling codes normally used by the
brain in producing behavior (Daly & Huggins, 2015).
© Lifehand2, Patrizia Tocci
Brain–computer–brain interfaces such as the robotic limb shown here enable
the brain to command robotic devices that provide it sensory feedback.
Cerebral Palsy
E. S. had a cold and infection when he was about 6 months old.
Subsequently, he had great difficulty coordinating his movements.
As he grew up, his hands and legs were almost useless and his
speech was extremely difficult to understand. E. S. was considered
intellectually disabled and spent most of his childhood in a custodial
school.
When E. S. was 13 years old, the school bought a computer. One
teacher attempted to teach E. S. to use it by pushing the keys with a
pencil held in his mouth. Within a few weeks, the teacher realized
that E. S. was extremely intelligent and could communicate and
complete school assignments on the computer. He eventually
received a motorized wheelchair that he could control with finger
movements of his right hand.
Assisted by the computer and the wheelchair, E. S. soon became
almost self-sufficient and eventually attended college, where he
achieved excellent grades and became a student leader. On
graduation with a degree in psychology, he became a social worker
and worked with children with cerebral palsy.
William Little, an English physician, first noticed in 1853 that
difficult or abnormal births could lead to later motor difficulties in
children. The disorder that Little described was cerebral palsy (also
called Little disease), a group of disorders that result from brain
damage acquired perinatally (at or near birth). Cerebral palsy is
common worldwide, with an incidence estimated to be 1.5 in every
1000 births. Among surviving babies who weigh less than 2.5
kilograms at birth, the incidence is much higher—about 10 in 1000.
The most common causes of cerebral palsy are birth injury,
especially due to anoxia, a lack of oxygen, and genetic defects.
Anoxia may result from a defect in the placenta, the organ that
allows oxygen and nutrients to pass from mother to child in utero, or
it may be caused by a tangled umbilical cord that reduces the oxygen
supply to the infant during birth. Other causes include infections,
hydrocephalus, seizures, and prematurity. All may produce a defect
in the immature brain before, during, or just after birth.
Most children with cerebral palsy appear healthy in the first few
months of life, but as the nervous system develops, motor
disturbances become progressively more noticeable. Common
symptoms include spasticity, an exaggerated contraction of muscles
when they are stretched; dyskinesia, involuntary extraneous
movements such as tremors and uncontrollable jerky twists (athetoid
movements); and rigidity, or resistance to passive movement.
Everyday movements are abnormal, and the affected person may be
confined to a wheelchair.
As a means for investigating the relationship between brain
development and susceptibility to brain injury, investigators can use
an MRI-derived baby connectome that maps changing brain
connections during development. A connectome is a comprehensive
map of the structural connectivity (the physical wiring) of an
organism’s nervous system. The baby connectome can reveal
developmental abnormalities in brain connections even at very early
ages, thus expanding the time window to initiate therapeutic
strategies (Castellanos et al., 2014).
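One way to picture a connectome is as a weighted graph or matrix of connection strengths between brain regions. The sketch below is a toy illustration of that idea only; the regions and numbers are invented and are not taken from the study cited above.

import numpy as np

# A toy structural connectome: rows and columns are brain regions, and each
# entry is a connection strength (for example, an estimated fiber count).
regions = ["M1", "S1", "thalamus", "cerebellum"]
connectome = np.array([
    [0, 5, 8, 3],   # M1
    [5, 0, 7, 1],   # S1
    [8, 7, 0, 2],   # thalamus
    [3, 1, 2, 0],   # cerebellum
])

# Summing each row gives a crude measure of how richly connected a region is;
# comparing such maps across ages could flag connections that fail to develop.
for name, total in zip(regions, connectome.sum(axis=1)):
    print(f"{name}: total connection weight = {total}")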
Motor Cortex
In 1870, two Prussian physicians, Gustav Fritsch and Eduard Hitzig,
discovered that they could electrically stimulate the neocortex of an
anesthetized dog to produce movements of the mouth, limbs, and paws
on the opposite side of the dog’s body. They provided the first direct
evidence that the neocortex controls movement. Later researchers
confirmed the finding by experimenting with a variety of animals as
subjects, including rats, monkeys, and apes.
Section 4-1 describes the milestones that led to understanding how
the nervous system uses electrical charge to convey information.
Based on this research background, beginning in the 1930s Wilder
Penfield (Penfield & Boldrey, 1958) used electrical stimulation to map
the cortices of conscious human patients who were about to undergo
neurosurgery. Penfield’s aim was to use the results to assist in surgery.
He and his colleagues confirmed that movements in humans are
triggered mainly in response to stimulation of the premotor and primary
motor cortices.
Figure 10-19 shows Penfield using brain stimulation to map the
cortex.
Mapping the Motor Cortex
Penfield summarized his results by drawing cartoons of body parts to
represent the areas of the motor cortex that produce movement in those
parts. The result was a homunculus (pl. homunculi; Latin for little person) that could be spread out across the primary motor cortex, as
illustrated in Figure 11-6 . Because the body is symmetrical, an
equivalent motor homunculus is discernible in the primary motor cortex
of each hemisphere, and each motor cortex mainly controls movement in
the opposite side of the body. Penfield also identified another, smaller
motor homunculus in the dorsal premotor area of each frontal lobe, a
region sometimes referred to as the supplementary motor cortex.
The striking feature of the homunculus shown in Figure 11-7 is the
disproportionate relative sizes of its body parts compared with the
relative sizes of actual parts of the human body. The homunculus has
huge hands with an especially large thumb. Its lips and tongue are also
prominent. By contrast, the trunk, arms, and legs—most of the area of a
real body—are small.
These distortions illustrate that extensive areas of M1 allow precise
regulation of the hands, fingers, lips, and tongue (see Figure 11-6 ). Body
areas over which we have relatively little motor control have a much
smaller representation in the motor cortex.
Another curious feature of the homunculus as laid out across the
motor cortex is that the body parts are discontinuous—arranged
differently from those of an actual body. The cortical area that produces
eye movements is in front of the homunculus head on the motor cortex
(see the top drawing in Figure 11-6 ), and the head is oriented with the
chin up and the forehead down (bottom drawing). The tongue is below
the forehead.
FIGURE 11-6 Penfield’s Homunculus Movements are topographically organized
in M1. Stimulation of dorsal medial regions produces movements in the lower limbs.
Stimulation in ventral regions of the cortex produces movements in the upper body,
hands, and face.
Modeling Movement
The motor homunculus shows at a glance that relatively large areas of
the brain control the body parts we use to make the most skilled
movements—our hands, mouth, and eyes. This makes it useful for
understanding M1’s topographic organization (functional layout).
There has been considerable debate over how the motor areas represented by Penfield’s homunculus might produce movement.
homunculus Representation of the human body in the sensory or
motor cortex; also any topographical representation of the body by a
neural area.
topographic organization Neural spatial representation of the body
or areas of the sensory world perceived by a sensory organ.
An early idea was that each part of the homunculus controls muscles
in that part of the body. Information from other cortical regions could be
sent to the motor homunculus, and neurons in the appropriate part of the
homunculus could then activate body muscles required for producing the
movement. If you wanted to pick up a coin, for example, messages from
the M1 finger area would instruct the fingers. More recent experiments
suggest that the motor cortex represents not muscles but rather a
repertoire of fundamental movement categories (Graziano, 2006).
The drawings in Figure 11-8 illustrate several movement categories
elicited in monkeys by electrical stimulation. They include (A) ascend,
descend, or jump, (B) reach to clasp,
(C) defensive posture or expression, (D) hand toward mouth, (E)
masticate or lick, (F) control centrally, and (G) control distally. Whole-
body movements are elicited from premotor cortex, and more precise
hand and mouth movements are elicited from M1. All these
movements occur only when the electrical stimulation lasts long
enough for the movement to take place.
Each observed movement has the same end regardless of the starting
location of a monkey’s limb or its other ongoing behavior. Electrical
stimulation that results in the hand coming to the mouth always recruits
the hand. If a weight is attached to the monkey’s arm, the evoked
movement compensates for the added load.
But categorized movements are inflexible: when an obstacle is placed
between the hand and the mouth, the hand hits the obstacle. If
stimulation continues after the hand has reached the mouth, the hand
remains there for the duration of the stimulation. Further, broad
movement categories—for example, reaching—cluster together on the
motor cortex, but reaching directed to different parts of space is elicited
from slightly different cortical points in the topographic reaching map.
FIGURE 11-8 Natural Movement Categories Movement
categories evoked by electrical stimulation of the monkey cortex and the
primary motor and premotor regions from which the categories were
elicited. Research from M. S. A. Graziano and T. N. Aflalo (2007). Mapping behavioral
repertoire onto the cortex. Neuron 56 : Figure 5, p. 243.
MRI studies on human subjects suggest that the human motor cortex,
like the monkey motor cortex, is organized in terms of functional
movement categories (Meier et al., 2008). The motor cortex maps appear
to represent basic types of movements that learning and practice can
modify. In other words, the motor cortex encodes not muscle twitches
but a lexicon, or dictionary, of movements. As with words and sentences,
these few movements used in different combinations produce all the
movements you are capable of, even in activities as complex as playing
basketball.
EXPERIMENT 11-2
Conclusion: The motor cortex takes part in planning movement,
executing movement, and adjusting the force and duration of a
movement.
Research from E. V. Evarts (1968). Relation of Pyramidal Tract Activity to Force Exerted
During Voluntary Movement. Journal of Neurophysiology, 31, p. 15.
Corticospinal Tracts
The main efferent pathways from the motor cortex to the brainstem to the
spinal cord are the corticospinal tracts. The axons from these tracts
originate mainly in motor cortex layer V pyramidal cells but also extend
from the premotor cortex and the sensory cortex (see Layering in the
Neocortex on page 359 ). The axons descend into the brainstem, sending
collaterals to numerous brainstem nuclei, and eventually emerge on the
brainstem’s ventral surface, where they form a large bump on each side.
These bumps, or pyramids, give the corticospinal tracts their alternative
name, the pyramidal tracts.
At this point, some axons descending from the left hemisphere cross
over to the right side of the brainstem. Likewise, some axons descending
from the right hemisphere cross over to the left side of the brainstem. The
remaining axons stay on their original side. This division produces two
corticospinal tracts, one crossed and the other uncrossed, entering each
side of the spinal cord. Figure 11-9 illustrates the division of tracts
originating in the left-hemisphere cortex. The dual tracts on each side of
the brainstem descend into the spinal cord, forming the two spinal cord
tracts.
The cross section of a spinal cord in Figure 11-10 shows the location
of the two tracts, on the left and right sides. The fibers that cross to the
opposite side of the brainstem descend the spinal cord in a lateral (side)
position to form the lateral corticospinal tract. The fibers that remain on
their original side continue from the brainstem down the spinal cord in an
anterior (front) position, to form the anterior corticospinal tract.
Retracing the pathway, the corticospinal tracts originate in the
neocortex and terminate in the spinal cord. Within the spinal cord,
corticospinal fibers make synaptic connections with both interneurons and
motor neurons, but the motor neurons carry all nervous system commands
out to the muscles.
FIGURE 11-10 Motor Tract Organization Interneurons and motor
neurons in the left and right anterior spinal cord tracts are topographically
arranged: the more lateral neurons innervate more distal parts of the limbs
(those farther from the midline), and the more medial neurons innervate
more proximal body muscles (those closer to the midline).
Motor Neurons
Spinal cord motor neurons are located in the anterior horns, which jut out from the anterior part of the spinal cord. The anterior horns contain two kinds of neurons. Interneurons lie
just medial to the motor neurons and project onto them. The motor
neurons send their axons to the body muscles. The fibers from the
corticospinal tracts make synaptic connections with both the interneurons
and the motor neurons, but the motor neurons carry all nervous system
commands to the muscles.
Figure 11-10 shows that a homunculus of the body is represented again
in the spinal cord. The more lateral motor neurons project to muscles that
control the fingers and hands, whereas intermediate motor neurons project
to muscles that control the shoulders and arms. The most medial motor
neurons project to muscles that control the body’s trunk. Axons of the
lateral corticospinal tract connect mainly with the lateral interneurons and
motor neurons, and axons of the anterior corticospinal tract connect
mainly to the medial interneurons and motor neurons.
To visualize how the cortical regions responsible for different
movements relate to the motor neuron homunculus in the spinal cord,
look again at Figure 11-9 . Place your finger on the index finger region of
the motor homunculus on the left side of the brain. If you trace the axons
of the cortical neurons downward, your route takes you through the
brainstem, across its midline, and down the right lateral corticospinal
tract.
The journey ends at the interneurons and motor neurons in the most
lateral region of the spinal cord’s right anterior horn—the horn on the
opposite (contralateral) side of the nervous system from which you began.
Following the axons of these motor neurons, you find that they synapse
on muscles that move the right index finger.
If you repeat the procedure by tracing the pathway from the trunk area
of the motor homunculus, near the top on the left side of the brain, you
follow the same route through the upper part of the brainstem. You do not
cross over to the opposite side, however. Instead, you descend into the
spinal cord on the left side, the same (ipsilateral) side of the nervous
system on which you began, eventually ending up in the most medial
interneurons and motor neurons of the left side’s anterior horn. (At this
point, some of these axons also cross over to the other side of the spinal
cord.) Thus, if you follow these motor neuron axons, you end up at their
synapses with the muscles that move the trunk on both sides of the body.
This visualization can help you remember the routes taken by motor
system axons. The limb regions of the motor homunculus contribute most
of their fibers to the lateral corticospinal tract, the fibers that cross over
to the opposite side of the spinal cord. They activate motor circuits that
move the arm, hand, leg, and foot on the opposite side of the body. In
contrast, the trunk regions of the motor homunculus contribute their fibers
to the anterior corticospinal tract. Only a few of these fibers cross, close
to their termination in the spinal cord; most control the trunk and limbs on
the same side of the body.
If you are right-handed, the neurons your brain is using to carry out
this task are the same neurons that you are tracing.
constraint-induced therapy Procedure in which restraint of a
healthy limb forces a patient to use an impaired limb to enhance
recovery of function.
corticospinal tract Bundle of nerve fibers directly connecting the
cerebral cortex to the spinal cord, branching at the brainstem into an
opposite-side lateral tract that informs movement of limbs and digits
and a same-side anterior tract that informs movement of the trunk;
also called pyramidal tract.
The interneurons and motor neurons of the spinal cord are envisioned
as a homunculus representing the muscles that they innervate.
Remember that the motor cortex is organized in terms of functional
movement categories, such as reaching or climbing (see Figure 11-8 ). A
similar template in the spinal cord ensures that instructions from the
motor cortex are reproduced faithfully. Presumably, lateral interneurons
produce acts of reaching or bringing the hand to the mouth, and the
medial interneurons and motor neurons produce whole-body movements,
including walking. Recall that a spinal cord isolated from the brain by a
cut is capable of many kinds of movements, and it is able to do so
because the movements are organized by its interneurons and motor
neurons.
In addition to the corticospinal pathways, about 24 other pathways
from the brainstem to the spinal cord carry instructions, such as
information related to posture and balance (see Section 11-4 ), and they
control the enteric nervous system as well as portions of the sympathetic
division of the ANS. Remember that for all these functions, the motor
neurons are the final common path.
FIGURE 11-11 Coordinating Muscle Movement
Control of Muscles
Spinal cord motor neurons synapse on the muscles that control body
movements. For example, the biceps and triceps of the upper arm control
movement of the lower arm. Limb muscles are arranged in pairs, as
shown in Figure 11-11 . One member of a pair, the extensor, moves
(extends) the limb away from the trunk. The other member of the pair, the
flexor, moves (flexes) the limb in toward the trunk. Experiment 11-2 on
page 368 demonstrates the on–off responses of cortical motor neurons,
depending on whether the flexor or extensor muscle is being used.
Connections between spinal cord interneurons and motor neurons
ensure that the muscles work together so that when one muscle contracts,
the other relaxes. Thus, the spinal cord interneurons and motor neurons
not only relay instructions from the brain but also, through their
connections, cooperatively organize the movement of many muscles. As
you know, the neurotransmitter at the motor neuron–muscle junction is
acetylcholine.
Figure 4-26 illustrates ACh action at a motor neuron–muscle
junction.
11-2 REVIEW
Motor System Organization
Before you continue, check your understanding.
1 . The ___________ organization of the motor cortex is represented by
a ___________, in which parts of the body that are capable of the most
skilled movements (especially the mouth, fingers, and thumbs) are
regulated by ___________ cortical regions.
2 . Change can take place in the cortical ___________ to aid in recovery
of function after motor cortex injury.
3 . Instructions regarding movement travel out from the motor cortex
through the ___________ tracts to terminate on interneurons that
project to motor neurons in the anterior horn of the spinal cord. Many
corticospinal-tract fibers cross to the opposite side of the spinal cord to
form the ___________ tracts; some stay on the same side to form the
___________ tracts.
4 . The anterior corticospinal tracts carry instructions for ___________
movements, whereas the lateral corticospinal tracts carry instructions
for ___________ and ___________ movements.
5 . Motor neuron axons in the spinal cord carry instructions to
___________ that are arranged in pairs. One ___________ a limb; the
other ___________ the limb.
6 . What does the plan of movements in the motor cortex as revealed by
electrical stimulation tell us about the brain’s representation of
movement?
Answers appear at the back of the book.
11-3 Basal Ganglia, Cerebellum, and Movement
The main evidence that the basal ganglia and the cerebellum perform
motor functions is that damage to either structure impairs movement.
Both have extensive connections with the motor cortex, which further
suggests their participation in movement. After an overview of each
structure’s anatomy, we look at some symptoms that arise after damage to
the basal ganglia or the cerebellum. Then we consider the roles each
structure plays in controlling movement.
Tourette Syndrome
The neurological disorder Tourette syndrome (TS) was first
described in 1885 by Georges Gilles de la Tourette, a French
neurologist, who described the symptoms as they appeared in
Madame de D., one of his patients:
Madame de D., presently age 26, at the age of 7 was afflicted by convulsive movements
of the hands and arms. These abnormal movements occurred above all when the child
tried to write, causing her to crudely reproduce the letters she was trying to trace. After
each spasm, the movements of the hand became more regular and better controlled until
another convulsive movement would again interrupt her work. She was felt to be
suffering from over-excitement and mischief, and because the movements became more
and more frequent, she was subject to reprimand and punishment. Soon it became clear
that these movements were indeed involuntary and convulsive in nature. The
movements involved the shoulders, the neck, and the face, and resulted in contortions
and extraordinary grimaces. As the disease progressed, and the spasms spread to involve
her voice and speech, the young lady made strange screams and said words that made
no sense. (Friedhoff & Chase, 1982)
What features of the reciprocal basal ganglia loops allow for selecting
movements or modulating movement force? One theory holds that the
basal ganglia can influence whether movement occurs (Friend & Kravitz,
2014). As illustrated in Figure 11-13 , a pathway (green) from the
thalamus to the cortex to the spinal cord produces movement. The globus
pallidus (red) can inhibit this pathway at the level of the thalamus.
The globus pallidus is controlled by two basal ganglia pathways, one
indirect and one direct. If the globus pallidus is excited, it in turn inhibits
the thalamus and blocks movement. If it is inhibited, motor cortex
circuits that include the thalamus are able to produce movement. The
globus pallidus thus acts like a volume control. If it is turned down,
movement can occur; if it is turned up, movement is blocked. This model
proposes that diseases of the basal ganglia affecting its “volume control”
function impair movement so that it is either excessive or slowed.
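The volume-control idea can be captured in a toy calculation. The sketch below is a deliberately crude illustration of the logic diagrammed in Figure 11-13, not a physiological model; all activity values are arbitrary.

# Toy "volume control" sketch: the indirect pathway excites the globus
# pallidus, the direct pathway inhibits it, and the globus pallidus in turn
# inhibits the thalamus. Activity values are arbitrary illustrative numbers.

def thalamic_activity(direct_drive, indirect_drive, baseline=1.0):
    """Return a crude thalamic activity level given the two pathway drives."""
    globus_pallidus = max(0.0, baseline + indirect_drive - direct_drive)
    return max(0.0, baseline - globus_pallidus)

# Direct pathway dominates: pallidal inhibition is lifted and the
# thalamocortical circuit is free to produce movement.
print(thalamic_activity(direct_drive=1.0, indirect_drive=0.2))  # 0.8

# Indirect pathway dominates: the pallidus is excited, the thalamus shuts
# down, and movement is blocked.
print(thalamic_activity(direct_drive=0.2, indirect_drive=1.0))  # 0.0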
The idea that the globus pallidus acts like a volume control is the
basis for several treatments for Parkinson disease. Consistent with the
volume hypothesis, recordings made from globus pallidus cells show
excessive activity, which inhibits movement in people with Parkinson
disease. If the globus pallidus or the subthalamic nucleus (a relay in the
indirect pathway) is partially surgically destroyed in Parkinson patients,
muscular rigidity is reduced and normal movement is improved.
Similarly, deep brain stimulation (DBS) of the globus pallidus
inactivates it, freeing movement.
Interestingly, impairments in the application of force may underlie
motor disorders of skilled movements such as writer’s cramp, one of a
number of impairments called selective dystonias. One such impairment,
the yips, is distorted execution of skilled movements by professional
athletes. For example, the yips have ended the careers of many professional golfers by ruining their swing (Belton et al., 2014).
Another structure in the basal ganglia, the nucleus accumbens, is also
called the ventral striatum, because it is the most ventral basal ganglia
nucleus. The nucleus accumbens receives projections from dopamine
cells of the ventral tegmental area, a nucleus just medial to the substantia
nigra in the midbrain. This projection, called the mesolimbic dopamine pathway, is part of a loop that aids our perception of cues signaling reward.
FIGURE 11-13 Regulating Movement Force Two pathways in
the basal ganglia modulate movements produced in the cortex. Green
pathways are excitatory; red are inhibitory. The indirect pathway excites
the globus pallidus internal, whereas the direct pathway has an inhibitory
effect. If activity in the indirect pathway dominates, the thalamus shuts
down, and the cortex is unable to produce movement. If direct-pathway
activity dominates, the thalamus can become overactive, amplifying
movement. Information from R. E. Alexander & M. D. Crutcher (1990). Functional
architecture of basal ganglia circuits: neural substrates of parallel processing. Trends in
Neuroscience, 13, p. 269.
11-4 Somatosensory System Receptors and Pathways
The motor system produces movement, but without sensation, movement
would lack direction and quickly become impaired. The somatosensory
system is indispensable for movement. The body senses tell us about the
physical contact we make with the world as well as how successful our
physical interactions with the world are.
Somatic sensation is unique among sensory systems. For the most
part, it is distributed throughout the body, not localized in the head, as
are vision, hearing, taste, and smell. Somatosensory receptors, found in the skin, muscles, and internal organs, including the circulatory system, are either specialized dendritic attachments on sensory neurons or the dendrites themselves. Somatosensory neurons
convey information to the spinal cord and brain. One part of the system,
however, is confined to a single organ. The inner ear houses the
vestibular system, which contributes to our sensations of balance and
head movement.
In considering the motor system, we started at the cortex and followed
the motor pathways out to the spinal cord (review Figures 11-9 and 11-
10 ). This efferent route follows the outward flow of neural instructions
regarding movement. As we explore the somatosensory system, we
proceed in the opposite direction, because afferent sensory information
flows inward, from the body’s sensory receptors through sensory
pathways in the spinal cord to the cortex.
Spinal Reflexes
Not only do somatosensory nerve fibers convey information to the
cortex, they also participate in behaviors mediated by the spinal cord and
brainstem. Spinal cord somatosensory axons, even those ascending the
posterior columns, give off axon collaterals that synapse with
interneurons and motor neurons on both sides of the spinal cord. The
circuits made between sensory neurons and muscles through these
connections mediate spinal reflexes.
The simplest spinal reflex is formed by a single synapse between a
sensory neuron and a motor neuron. Figure 11-20 illustrates such a
monosynaptic reflex, the knee jerk. It affects the quadriceps muscle of
the thigh, which is anchored to the leg bone by the patellar tendon. When
the lower leg hangs free and this tendon is tapped with a small hammer,
the quadriceps muscle is stretched, activating the stretch-sensitive
sensory receptors embedded in it.
The sensory receptors then send a signal to the spinal cord through
sensory neurons that synapse with motor neurons projecting to the same
thigh muscle. The discharge from the motor neurons stimulates the
muscle, causing it to contract to resist the stretch. Because the tap is
brief, the stimulation is over before the motor message arrives, and the
muscle contracts even though it is no longer stretched. This contraction
pulls the leg up, producing the knee jerk reflex.
Somatosensory Homunculus
In his studies of human patients undergoing brain surgery, Wilder
Penfield electrically stimulated the somatosensory cortex and recorded
patients’ responses. Stimulation at some sites elicited sensations in the
foot; stimulation of other sites produced sensations in a hand, the trunk,
or the face. By mapping these responses, Penfield was able to construct a
somatosensory homunculus in the cortex, shown in Figure 11-25 A . The
sensory homunculus looks nearly identical to the motor homunculus
shown in Figure 11-6 in that the most sensitive areas of the body are
accorded relatively large cortical areas.
Using smaller electrodes and more precise recording techniques in
monkeys, Jon Kaas (1987) found that Penfield’s homunculus could be
subdivided into a series of smaller homunculi. When Kaas stimulated
sensory receptors on the body and recorded the activity of cells in the
sensory cortex, he found that the somatosensory cortex comprises four
representations of the body. Each is associated with a class of sensory
receptors.
The progression of these representations across S1 from front to back
is shown in Figure 11-25 B. Area 3a cells are responsive to muscle
receptors; area 3b cells are responsive to slow-responding skin receptors.
Area 1 cells are responsive to rapidly adapting skin receptors, and area 2
cells are responsive to deep tissue pressure and joint receptors. In another
study, Hiroshi Asanuma (1989) and his coworkers found still another
sensory representation in the motor cortex (area 4) in which cells
respond to muscle and joint receptors.
FIGURE 11-25 (A) Penfield’s single-homunculus model
Somatosensory Cortex
Perceptions constructed from elementary sensations depend on
combining the sensations. This combining takes place as areas 3a and 3b
project onto area 1, which in turn projects onto area 2. Whereas a cell in
area 3a or 3b may respond to activity in only a certain area on a certain
finger, for example, cells in area 1 may respond to similar information
from a number of fingers.
At the next level of synthesis, cells in area 2 respond to stimulation in
a number of locations on a number of fingers as well as to stimulation
from different kinds of somatosensory receptors. Thus, area 2 contains
multimodal neurons responsive to force, orientation, and direction of
movement. We perceive all these properties when we hold an object in
our hands and manipulate it.
With each successive information relay, both the size of the pertinent
receptive fields and the synthesis of somatosensory modalities increase.
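The convergence described above can be pictured as simple pooling: a higher-area cell responds whenever any of several lower-area cells with small receptive fields responds. The sketch below is only a cartoon of that idea, with invented responses; it is not a model from the text.

# Cartoon of receptive field convergence in S1: an "area 1" unit pools the
# responses of several "area 3b" units, so it answers to a larger skin area.
area_3b_responses = {
    "index_fingertip":  1.0,  # this 3b unit is responding to a touch
    "middle_fingertip": 0.0,
    "ring_fingertip":   0.0,
}

def area_1_response(inputs):
    """A pooled unit fires if any of its inputs is active."""
    return max(inputs.values())

print(area_1_response(area_3b_responses))  # responds to touch on any of the three fingertips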
The segregation of sensory neuron types at the level of the cortex is
likely the basis for our ability to distinguish among different kinds of
sensory stimuli coming from different sources. For example, we
distinguish between tactile stimulation on the surface of the skin, which
is usually produced by some external agent, and stimulation coming
from muscles, tendons, and joints, which is usually produced by our own
movements.
At the same time, we perceive the combined sensory properties of a
stimulus. For instance, when we manipulate an object, we know the
object both by its sensory properties, such as temperature and texture,
and by the movements we make as we handle it. Thus, the cortex
provides for somatosensory synthesis too. The tickle sensation seems rooted in a self-versus-other somatosensory distinction, as described in Research Focus 11-6, Tickling, on page 392.
Research by Vernon Mountcastle (1978) shows that cells in the
somatosensory cortex are arranged in functional columns running from
layer I to layer VI, similar to the functional columns found in the visual
cortex. Every cell in a functional somatosensory cortical column
responds to a single class of receptors. Some columns are activated by
rapidly adapting skin receptors, others by slowly adapting skin receptors,
still others by pressure receptors, and so forth. All neurons in a
functional column receive information from the same local skin area. In
this way, neurons lying within a column seem to be an elementary
functional unit of the somatosensory cortex.
See Section 9-1 for details on receptive fields.
Figure 9-33 shows functional column organization in V1.
RESEARCH FOCUS 11-6
Tickling
Everyone knows the effects and consequences of tickling. The
perception is a curious mixture of pleasant and unpleasant
sensations. The two kinds of tickling are knismesis, the sensation from a light caress, and gargalesis, the pleasurable effect of hard rhythmic probing.
The tickle sensation is felt not only by humans but also by other
primates and by cats, rats, and probably most mammals. Play in rats
is associated with 50-kilohertz vocalizations, and tickling body
regions that are targets of the rats’ own play also elicits 50-kilohertz
vocalizations (Panksepp, 2007).
Tickling is evidently rewarding: people and animals solicit tickles from others. They even enjoy observing others being tickled. Using a
robot and brain imaging techniques, Sarah Blakemore and her
colleagues (1998) explained why we cannot tickle ourselves.
Blakemore had participants deliver identical tactile stimuli to the palms of their hands under two conditions. In one condition, the stimulus was predictable; in the other, a robot introduced an unpredictable delay in the stimulus. Only the unpredictable stimulus was perceived as a tickle. Thus, it is not the stimulation itself but its
unpredictability that accounts for the tickle perception. This is why
we cannot tickle ourselves. Yet Windt and associates (2015), using a
self-report method, find that during lucid dreams, the self–other
distinction is absent: people do dream that they tickle themselves.
One interesting feature of tickling is the distinctive laughter it
evokes. This laughter can be identified by sonograms (sound
analysis), and people can distinguish tickle-related laughter from
other forms of laughter.
Intrigued by findings that all apes appear to laugh during tickling,
Ross and coworkers (2009) compared tickle-related laughter in apes
and found that human laughter is more similar to chimpanzee
laughter than to the laughter of gorillas and other apes. We humans
thus have inherited from our common ape ancestors both a
susceptibility to tickling and the laughter that goes with it.
FIGURE 11-27 Visual Aid Section 9-4 explains how visual information
from the dorsal and ventral streams contributes to movement.
apraxia Inability to make voluntary movements in the absence of
paralysis or other motor or sensory impairment, especially an
inability to make proper use of an object.
11-5 REVIEW
Exploring the Somatosensory Cortex
Before you continue, check your understanding.
1 . The ___________ somatosensory cortex, arranged as a series of
homunculi, feeds information to the ___________ somatosensory
cortex, which produces somatosensory perception.
2 . Damage to the secondary somatosensory cortex produces
___________, an inability to complete a series of movements.
3 . The somatosensory cortex provides information to the ___________
stream to produce unconscious movements and also provides
information to the ___________ stream for conscious recognition of
objects.
4 . Explain briefly what phantom limb pain tells us about the brain.
Answers appear at the back of the book.
12
Olfaction
Olfaction is the most puzzling sensory system. We can discriminate
thousands of odors, yet we have great difficulty finding words to describe
what we smell. We may like or dislike smells or compare one smell to
another, but we lack a vocabulary for olfactory perceptions.
Wine experts rely on olfaction to tell them about wines, but they must
learn to use smell to do so. Training courses in wine sniffing typically run
one full day per week for a year, and most participants still have great
difficulty passing the final test. Vision and audition pose no such
difficulty, because those senses are designed to analyze specific
qualities of sensory input, such as pitch in audition or color in vision.
Olfaction, by contrast, seems designed to discriminate whether information is
safe or familiar—is the smell from an edible food? from a friend or a
stranger?—or to identify a signal, perhaps from a receptive mate.
Receptors for Smell
Conceptually, identifying chemosignals is similar to identifying other
sensory stimuli (light, sound, touch). But whereas receptors for light and
sound transduce physical energy into receptor potentials, olfactory
receptors interact directly with the chemicals they detect. This constant chemical interaction must
be tough on the receptors: in contrast with receptors for light, sound, and
touch, chemical receptors are constantly being replaced. The life of an
olfactory receptor neuron is about 60 days.
The receptor surface for olfaction, illustrated in Figure 12-3 , is the
olfactory epithelium in the nasal cavity. The epithelium is composed of
receptor cells and support cells. Each receptor cell sends a process ending
in 10 to 20 cilia into a mucous layer, the olfactory mucosa. Chemicals in
the air we breathe dissolve in the mucosa to interact with the cilia. If an
olfactory chemosignal affects the receptors, metabotropic activation of a
specific G protein leads to an opening of sodium channels and a change in
membrane potential.
Figure 5-15 A illustrates activity in such a metabotropic receptor.
The epithelial receptor surface varies widely across species. In
humans, this area is estimated to range from 2 to 4 square centimeters; in
dogs, about 18 square centimeters; and in cats, about 21 square
centimeters. No wonder our sensitivity to odors is less acute than that of
dogs and cats: they have as much as 10 times the receptor area that humans have!
Roughly analogous to the tuning characteristics of cells in the auditory
system, olfactory receptor neurons in vertebrates do not respond to
specific odors but rather to a range of odors.
How does a limited number of receptor types allow us to smell many
different odors? The simplest explanation is that any given odorant
stimulates a unique pattern of receptors, and the summed activity or
pattern of activity produces our perception of a particular odor.
Analogously, the visual system enables us to identify several million
colors with only three receptor types in the retina: the summed activity of
the three cones leads to our richly colored life.
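To make the pattern-coding idea concrete, here is a minimal sketch in Python (an illustration, not material from the text): a handful of broadly tuned receptor types, each responding to several odorants, still produces a distinct pattern of activity for every odorant. The receptor names, odorants, and sensitivity values are invented for illustration only.

RECEPTOR_TUNING = {  # relative sensitivity of each broadly tuned receptor type
    "R1": {"rose": 0.9, "lemon": 0.2, "smoke": 0.0},
    "R2": {"rose": 0.4, "lemon": 0.8, "smoke": 0.1},
    "R3": {"rose": 0.0, "lemon": 0.3, "smoke": 0.9},
}

def activation_pattern(odorant):
    """Return the activity evoked across all receptor types by one odorant."""
    return tuple(tuning.get(odorant, 0.0) for tuning in RECEPTOR_TUNING.values())

# Each odorant yields a unique pattern even though every receptor type
# responds to more than one odorant; the pattern, not any single
# receptor, identifies the odor.
for odor in ("rose", "lemon", "smoke"):
    print(odor, activation_pattern(odor))

The same logic, scaled up to the much larger repertoire of real olfactory receptor types, is what allows a limited set of receptors to encode a vast number of odors.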
Gustation
Research reveals significant differences in taste preferences both between
and within species. Humans and rats like sucrose and saccharin solutions,
but dogs reject saccharin and cats are indifferent to both, inasmuch as
they do not detect sweetness at all. The failure of cats to taste sweet may
not be surprising: they are pure carnivores, and nothing that they normally
eat is sweet.
Within the human species, clear differences in taste thresholds and
preferences are obvious. An example is the preference for or dislike of
bitter tastes—the flavor of brussels sprouts, for instance. People tend to
love them or hate them. Linda Bartoshuk (2000) showed absolute
differences among adults: some perceive certain tastes as very bitter,
whereas others are indifferent to them. Presumably, the latter group is
more tolerant of brussels sprouts.
Sensitivity to bitterness is related to genetic differences in the ability to
detect a specific bitter chemical (6-n-propylthiouracil, or PROP). PROP
bitterness associates with allelic variation in the taste receptor gene
TAS2R38. People able to detect minute quantities of PROP find the taste
extremely bitter; they are sometimes called supertasters. Those who do
not taste PROP as very bitter are nontasters. The advantage of being a
supertaster is avoiding many bitter “foods” that are poisonous. The disadvantage is
that supertasters avoid many nutritious fruits and vegetables that they find
bitter.
Valerie Duffy and her colleagues (2010) investigated sensitivity to
quinine (usually perceived as bitter) in participants who were assessed for
the TAS2R38 genotype. They estimated participants’ taste bud density by
counting the number of papillae (the little bumps on the tongue). Quinine
was reported as more bitter to those who tasted PROP as very bitter or to
those who had more taste buds. Thus, detection of bitterness is related
both to TAS2R38 and to tongue anatomy.
Nontasters, whether by genotype or by phenotype (few taste buds),
reported greater consumption of vegetables, both bitter and not.
Nontasters with higher numbers of taste buds reported eating about 25
percent more vegetables than the other groups. These data suggest that
genetic variation in taste can explain differences in overall consumption of
all types of vegetables. They also suggest that persuading supertasters to eat
more healthy foods—that is, vegetables—may prove difficult.
Differences in taste thresholds also emerge as we age. Children are
much more responsive to taste than adults and are often intolerant of
spicy foods because they have more taste receptors than adults have. It is
estimated that by age 20, humans have lost at least 50 percent of their
taste receptors. No wonder children and adults have different food
preferences.
Receptors for Taste
Taste receptors are found in taste buds on the tongue, under the tongue, on
the soft palate on the roof of the mouth, on the sides of the mouth, and at
the back of the mouth on the nasopharynx. Each of the five taste receptor
types responds to a different chemical component in food. The four most
familiar are sweet, sour, salty, and bitter. The fifth type, called the umami
receptor, is especially sensitive to glutamate, a neurotransmitter molecule,
and perhaps to nucleotides. (Nucleotide bases are the structural units of
nucleic acids such as DNA and RNA.)
Taste receptors are grouped into taste buds, each containing several
receptor types, as illustrated in Figure 12-6 . Gustatory stimuli interact
with the receptor tips, the microvilli, to open ion channels, leading to
changes in membrane potential. At its base, the taste bud contacts the
branches of afferent cranial nerve 7 (facial), 9 (glossopharyngeal), or 10
(vagus).
FIGURE 12-6 Anatomy of a Taste Bud Information from D. V. Smith & G.
M. Shepherd (2003). Chemical senses: Taste and olfaction. In L. R. Squire, F. E. Bloom, S. K.
McConnell, J. L. Roberts, N. C. Spitzer, & M. J. Zigmond (Eds.), Fundamental Neuroscience
(2nd ed., pp. 631–667), New York: Academic Press.
Gustatory Pathways
Cranial nerves 7, 9, and 10 form the main gustatory nerve, the solitary
tract. On entering the brainstem, the tract splits, as illustrated in Figure
12-7 . One route (traced in red) travels through the posterior medulla to
the ventroposterior medial nucleus of the thalamus. This nucleus in turn
sends out two pathways, one to the primary somatosensory cortex (S1)
and the other to the primary gustatory cortex of the insula, a region just
rostral to the secondary somatosensory cortex (S2).
The gustatory region in the insula is dedicated to taste, whereas S1 is
also responsive to tactile information and is probably responsible both for
localizing tastes on the tongue and for our reactions to a food’s texture.
The gustatory cortex sends a projection to the orbital cortex in a region
near the input from the olfactory cortex. Neuroimaging studies suggest
that the mixture of olfactory and gustatory input in the orbital cortex gives
rise to our perception of flavor. Ambience, including music and light, also
affects this region of orbital cortex, increasing blood flow and so
enhancing our experience of flavor.
A meta-analysis of noninvasive imaging studies of taste-responsive
brain regions, conducted by Maria Veldhuizen and colleagues
(2011), also reveals a brain asymmetry. The investigators conclude that
areas in the right orbital cortex mediate the pleasantness of tastes,
whereas the same region in the left hemisphere mediates the
unpleasantness of tastes. While the insula identifies the nature and
intensity of flavors, the orbitofrontal cortex (OFC) evaluates the affective
properties of tastes.
The second pathway from the gustatory nerve (shown in blue in Figure
12-7 ) projects via the nucleus of the solitary tract in the brainstem to the
hypothalamus and amygdala. Researchers hypothesize that these inputs
somehow participate in feeding behavior, possibly evaluating the
pleasantness and strength of flavors.
FIGURE 12-7 Gustatory Pathways
12-2 REVIEW
The Chemical Senses
Before you continue, check your understanding.
1 . The receptor surface for olfaction is the _________.
2 . Olfactory and gustatory pathways eventually merge in the
orbitofrontal cortex, leading to the perception of ___________.
3 . Chemosignals that convey information about the sender are called
___________.
4 . The perception of bitterness is related to both the ___________ and
the ___________.
5 . How do a relatively limited number of receptor types allow us to
smell a trillion different odors?
Answers appear at the back of the book.
Behavior
Odor and taste play a fundamental role in the biology of emotional and
motivated behavior. Why does the sight or smell of a bird or a mouse
trigger stalking and killing in a cat? Why does the human body stimulate
sexual interest? We can address such questions by investigating the
evolutionary and environmental influences on brain circuit activity that
contribute to behavior.
Older men and younger women are most likely to exhibit the
mutually desired set of traits, which leads to a universal tendency for age
differences between mates. Although the idea is controversial, Buss
argues that these preferences are a product of natural selection in a Stone
Age environment, when women and men would have faced different
daily problems and thus would have developed separate adaptations
related to mating.
Evolutionary theory cannot account for all human behavior, perhaps
not even homicide or mate selection. By casting an evolutionary
perspective on the neurological bases of behavior, though, evolutionary
psychologists can generate intriguing hypotheses about how natural
selection might have shaped the brain and behavior.
Section 1-2 reviews Darwin’s theory, materialism, and
contemporary perspectives on brain and behavior.
The environment does not always change the brain. A case in point
can be seen again in pigeons. A pigeon in a Skinner box can quickly
learn to peck a disc to receive a bit of food, but it cannot learn to peck a
disc to escape from a mild electric shock to its feet. Why not? Although
the same simple pecking behavior is being rewarded, apparently the
pigeon’s brain is not prewired for this second kind of association. The
bird is prepared genetically to make the first association, for food, but
not prepared for the second. This makes adaptive sense: typically, it flies
away from noxious situations.
John Garcia and R. A. Koelling (1966) were the first psychologists to
demonstrate the specific nature of this range of behavior–consequence
associations that animals are able to learn. Garcia observed that farmers
in the western United States were constantly shooting at coyotes for
attacking lambs, yet despite the painful consequences, the coyotes never
seemed to learn to stop killing lambs in favor of safer prey. The reason,
Garcia speculated, is that a coyote’s brain is not prewired to make this
kind of association.
So Garcia proposed an alternative to deter coyotes from killing lambs
—an association that a coyote’s brain is prepared to make: the
connection between eating something that makes one sick and avoiding
that food in the future. Garcia gave the coyotes a poisoned lamb carcass,
which sickened but did not kill them. With only one pairing of lamb and
illness, most coyotes learned not to eat sheep for the rest of their lives.
Many humans have similarly acquired food aversions because a
certain food’s taste—especially a novel taste—was subsequently paired
with illness. This learned taste aversion is acquired even when the food
eaten is in fact unrelated to the later illness. As long as the taste and the
nausea are paired in time, the brain is prewired to connect them.
learned taste aversion Acquired association between a specific
taste or odor and illness; leads to an aversion to foods that have the
taste or odor.
One of us ate his first Caesar salad the night before coming down
with a stomach flu. A year later, he was offered another Caesar salad
and, to his amazement, felt ill just at the smell of it. Even though he
knew that the salad had not caused his earlier illness, he nonetheless had
formed an association between the novel flavor and the illness. This
strong and rapid associative learning makes adaptive sense. Having a
brain that is prepared to make a connection between a novel taste and
subsequent illness helps an animal avoid poisonous foods and so aids in
its survival. A curious aspect of taste aversion learning is that we are
unaware of having formed the association until we encounter the taste
and/or smell again.
Section 14-4 has more on how our brains are wired to link unrelated
stimuli.
The fact that the nervous system is often prewired to make certain
associations but not to make others has led to the concept of
preparedness in learning theories. Preparedness can help account for
some complex behaviors. For example, if two rats are paired in a small
box and exposed to a mild electric shock, they will immediately fight
with one another, even though neither was responsible for the shock.
Apparently, the rat brain is predisposed to associate injury with nearby
objects or other animals. The extent to which we might extend this idea
to explain such human behaviors as bigotry and racism is an interesting
topic to ponder.
preparedness Predisposition to respond to certain stimuli
differently from other stimuli.
Why does a fly stop eating? A logical possibility is that its blood
sugar level rises to some threshold. If this were correct, injecting glucose
into the circulatory system of a fly would prevent the fly from eating.
But that does not happen. Blood glucose level has no effect on a fly’s
feeding. Furthermore, injecting food into the animal’s stomach or
intestine has no effect either. So what is left?
Flies have a nerve (the recurrent nerve) that extends from the neck to
the brain and carries information about whether any food is present in the
esophagus. If the recurrent nerve is cut, the fly is chronically hungry and
never stops eating. Such flies become so full and fat that their feet no
longer reach the ground, and they become so heavy that they cannot fly.
Even though a fly appears to act with a purpose in mind, a series of
very simple mechanisms actually controls its behavior—mechanisms not
remotely related to our concept of thought or intent. Whether the fly eats or
stops eating is determined simply by the activity of the recurrent nerve.
Clearly, we should not assume simply from
appearances that a behavior carries intent. Behavior can have very subtle
causes that do not include conscious purpose. How do we know that any
behavior is purposeful? That question turns out to be difficult to answer.
12-3 REVIEW
Evolution, Environment, and Behavior
Before you continue, check your understanding.
1 . B. F. Skinner argued that behaviors could be shaped by ___________
in the environment.
2 . John Garcia used the phenomenon of ___________ to discourage
coyotes from killing lambs.
3 . The brain of a species is prewired to produce ___________ to
specific sensory stimuli selected by evolution to prompt certain
associations between events.
4 . When a fly wanders around on a table, it is not exploring so much as ___________.
5 . Explain briefly how the concept of preparedness accounts for
puzzling human behaviors.
Answers appear at the back of the book.
Emotional Behavior
The neural circuits that control behavior encompass regions at all levels
of the brain, but the critical neural structures in emotional and motivated
behavior are the hypothalamus and associated pituitary gland, the limbic
system, and the frontal lobes. The expression of emotions includes
physiological changes: in heart rate, blood pressure, and hormone
secretions. It also includes motor responses, especially movements of the
muscles that produce facial expressions (see Figure 12-9 ). So much of
human life revolves around emotions that understanding them is central to
understanding our humanness.
But emotions are not restricted to people. A horse that is expecting
alfalfa for dinner will turn its nose up at grass hay and may stomp its front
feet and toss its head. Two dogs that are in competition for attention may
snap at one another. Charles Darwin interpreted such behaviors as
emotions in his classic book The Expression of the Emotions in Man and
Animals, published in 1872. We now know that emotional expression in
all mammals is related to activity in the limbic system and frontal lobes.
Although the hypothalamus plays a central role in controlling
motivated behavior, it takes its instructions from the limbic system and
the frontal lobes. The limbic and frontal regions project to the
hypothalamus, which houses many basic neural circuits for controlling
behavior and for autonomic processes that maintain critical body
functions within a narrow, fixed range—that is, homeostatic
mechanisms.
homeostatic mechanism Process that maintains critical body
functions within a narrow, fixed range.
Section 6-5 explores hormonal regulation of homeostatic
mechanisms.
In Figure 12-11 , the neck of a funnel represents the hypothalamus,
and the limbic system and frontal lobes form the funnel’s rim. To produce
behavior, the hypothalamus sends axons to other brainstem circuits. But
not all behavior is controlled via the funnel to the hypothalamus. Many
other routes to the brainstem and spinal cord bypass the hypothalamus,
among them projections from the motor cortex to the brainstem and
spinal cord. Thus it is primarily motivated behaviors that require
hypothalamic involvement.
FIGURE 12-11 Funneling Signals In this model, many inputs from the
frontal lobes and limbic system funnel through the hypothalamus, which
sends its axons to control brainstem circuits that produce motivated
behaviors.
Motivated behaviors listed in the figure include salt consumption, waste
elimination, sex, parenting, aggression, food preference, curiosity, and reading.
Nonregulatory Behaviors
Unlike regulatory behaviors, such as eating or drinking, nonregulatory
behaviors are neither required to meet the basic survival needs of an
animal nor controlled by homeostatic mechanisms. Thus, nonregulatory
behaviors include everything else we do—from sexual intercourse to
parenting to such curiosity-driven activities as conducting psychology
experiments.
nonregulatory behavior Behavior unnecessary to the animal’s basic
survival needs.
Some nonregulatory behaviors, such as sexual intercourse, involve the
hypothalamus, but most of them probably do not. Rather, such behaviors
engage a variety of forebrain structures, especially the frontal lobes.
Presumably, as the forebrain evolved and enlarged, so did our range of
nonregulatory behaviors.
Most nonregulatory behaviors are strongly influenced by external
stimuli. As a result, sensory systems must play some role in controlling
them. For example, the sexual behavior of most male mammals is
strongly influenced by pheromones emitted by receptive females. If the
olfactory system is not functioning properly, we can expect abnormalities
in sexual behavior. We will return to sexual behavior in Section 12-5 , as
we investigate how a nonregulatory behavior is controlled. But first we
explore the brain structures that take part in motivated behaviors—
nonregulatory and regulatory.
FIGURE 12-17 Limbic Lobe Encircling the brainstem, the limbic lobe as
described by Broca consists of the cingulate gyrus and hippocampal
formation (the hippocampus and parahippocampal cortex), the amygdala,
the mammillothalamic tract, and the anterior thalamus.
FIGURE 12-18 Limbic System (A) In this contemporary conception of the limbic
system, an interconnected network of structures—the Papez circuit—controls emotional
expression.(B) A schematic representation, coded to brain areas shown in part A by
color, charts the limbic system’s major connections. (C) A reminder that parts A and B
can be conceptualized as part of a funnel rim of outputs that, through the hypothalamus,
produce emotional and motivated behavior.
Amygdala
Named for the Greek word for almond because of its shape, the
amygdala consists of three principal subdivisions, the corticomedial area,
the basolateral area, and the central area. Like the hypothalamus, the
amygdala receives inputs from all sensory systems. But in contrast with
the hypothalamic neurons, more complex stimuli are necessary to excite
amygdalar neurons. Indeed, many amygdalar neurons are multimodal:
they respond to more than one sensory modality. In fact, some respond to
the entire sensory array: sight, sound, touch, taste, and smell. These
amygdalar cells must shape a rather complex image of the sensory world.
Section 15-3 elaborates on multisensory integration and the binding
problem.
amygdala Almond-shaped collection of nuclei in the limbic system;
plays a role in emotional and species-typical behaviors.
The amygdala sends connections primarily to the hypothalamus and
the brainstem, where it influences neural activity associated with
emotions and species-typical behavior. For example, when the amygdala
of a person with epilepsy is electrically stimulated before brain surgery,
the person becomes fearful and anxious. We observed a woman who
responded with increased respiration and heart rate, saying that she felt as
if something bad was going to happen, although she could not specify
what.
Amygdala stimulation can also induce eating and drinking. We
observed a man who drank water every time the stimulation was turned
on. (There happened to be a pitcher of water on the table next to him.)
Within 20 minutes, he had consumed about 2 liters of water. When asked
if he was thirsty, he said, “No, not really. I just feel like drinking.”
The amygdala’s role in eating can be seen in patients with amygdalar
lesions. Like Roger as a result of his tumor, many of these patients lose
discrimination in their food choices, eating foods that were formerly
unpalatable to them. Lesions of the amygdala may also give rise to
hypersexuality.
Emotional Disorders
Major depression, a highly disruptive emotional disorder, is
characterized by some or all the following: prolonged feelings of
worthlessness and guilt, the disruption of normal eating habits, sleep
disturbances, a general slowing of behavior, and frequent thoughts of
suicide. A depressed person feels severely despondent for a long time.
Major depression is common in our modern world, with a prevalence of
about 6 percent of the population at any given time.
Major depression, detailed in Focus 6-3, is among the most treatable
psychological disorders. Cognitive and interpersonal therapies are as
effective as drugs. See Section 16-4 .
phobia Fear of a clearly defined object or situation.
Depression has a genetic component. It not only runs in families but
also frequently occurs in both members of a pair of identical
twins. The genetic component in depression implies a biological
abnormality, but the cause remains unknown. However, neuroscience
researchers’ interest in the role of epigenetic changes in depression is
increasing. One hypothesis is that early life stress may produce
epigenetic changes in the prefrontal cortex (see the review by Schroeder
et al., 2010).
Excessive anxiety is an even more common emotional problem than
depression. Anxiety disorders, including posttraumatic stress disorder
(PTSD), phobias, generalized anxiety disorder, panic disorder, and
obsessive-compulsive disorder (OCD), are estimated to affect 15 percent
to 35 percent of the population. As described in Clinical Focus 12-3 ,
Anxiety Disorders, symptoms include persistent fears and worries in the
absence of any direct threat, usually accompanied by various
physiological stress reactions, such as rapid heartbeat, nausea, and
breathing difficulty.
CLINICAL FOCUS 12-3
Anxiety Disorders
Animals typically become anxious at times, especially when they are in
obvious danger. But anxiety disorders are different. They are
characterized by intense feelings of fear or anxiety inappropriate for the
circumstances. People with an anxiety disorder have persistent and
unrealistic worries about impending misfortune. They also tend to have
multiple physical symptoms attributable to hyperactivity of the
sympathetic nervous system.
G. B.’s case is a good example. He was a 36-year-old man with two
college degrees who began to have severe spells initially diagnosed as a
heart condition. He would begin to breathe heavily, sweat, develop
heart palpitations, and sometimes feel pains in his chest and arms.
During these attacks, he was unable to communicate coherently and
would lie helpless on the floor until an ambulance arrived to take him to
an emergency room.
Extensive medical testing and multiple attacks over about 2 years
eventually led to the diagnosis of generalized anxiety disorder. Like
most of the 5 percent of the U.S. population who have an anxiety
disorder at some point in their life, G. B. was unaware that he was
overly anxious. The cause of generalized anxiety is difficult to pinpoint,
but one likely explanation is the cumulative effect of general
stress.
Although G. B. appeared outwardly calm most of the time, he had
been a prodemocracy activist in communist Poland, a dangerous
position. Because of the dangers, he and his family eventually escaped
from Poland to Turkey, and from there they went to Canada. G. B. may
have had continuing worries about the repercussions of his political
activities—worries (and stress) that eventually found expression in
generalized anxiety attacks.
The most common and least disabling anxiety disorders are
phobias. A phobia pertains to a clearly defined, dreaded object (such as
spiders or snakes) or situation (such as enclosed spaces or crowds).
Most people have a mild aversion to some types of stimuli. Such
aversion becomes a phobia only when a person’s feelings about a
disliked stimulus lead to overwhelming fear and anxiety.
The incidence of disabling phobias, that is, phobias serious enough to
interfere with living well, is surprisingly high: they are estimated to affect
at least 1 in 10 people. Most people with a phobia control the emotional reaction
by avoiding what they dread. Others face their fears in controlled
settings, with the goal of overcoming them.
Panic disorder has an estimated incidence on the order of 3 percent
of the population. Symptoms include recurrent attacks of intense terror
that begin without warning and without any apparent relation to
external circumstances. Panic attacks usually last only a few minutes,
but the experience is always terrifying. Sudden activation of the
sympathetic nervous system leads to sweating, a wildly beating heart,
and trembling.
panic disorder Recurrent attacks of intense terror that come on
without warning and without any apparent relation to external
circumstances.
Although panic attacks may occur only occasionally, the victim’s
dread of another episode may be continual. Consequently, many people
with panic disorder also have agoraphobia, a fear of public places or
situations in which help might not be available. This phobia makes
some sense, because a person with a panic disorder may feel
particularly vulnerable about the possibility of having an attack in a
public place.
Freud believed that anxiety disorders are psychological in origin and
treatable with talking therapies in which people confront their fears.
Today, cognitive-behavioral therapies serve this purpose, as shown in
the accompanying photo. More recently, a behavioral therapy called
mindfulness, a form of meditation, is proving effective in treating
anxiety disorders. Its effectiveness is correlated with suppressed
activity in the anterior cingulate region (Garrison et al., 2015). The
effect is greater in trained as opposed to novice meditators, which
supports the value of mindfulness training programs.
Pharmacologically, anxiety disorders are most effectively treated
with benzodiazepines such as diazepam (Valium), the best known.
Alprazolam (Xanax) is the most commonly prescribed drug for panic
attacks. Benzodiazepines act by augmenting GABA’s inhibitory effect
and are believed to exert a major influence on neurons in the amygdala.
Whether treatments are behavioral, pharmacological, or both, the
general goal is normalizing brain activity in the limbic system.
Lea Paterson/Science Source
Up to 90 percent of people with an animal phobia overcome their fears in a
single exposure therapy session that lasts 2 or 3 hours.
Nonregulatory Behavior
The two distinctly different types of motivated behaviors described in
Section 12-4 are regulatory behaviors, which maintain vital body system
balance, or homeostasis; and nonregulatory behaviors, those not
controlled by a homeostatic mechanism—basically all other behaviors.
In this section, we focus first on the control of two regulatory behaviors
in humans—eating and fluid intake. Then we explore the control of
human sexual behavior. Although sexual behavior is nonregulatory, that is,
not essential for an individual organism’s survival, it is of enormous
psychological significance to humans.
Controlling Eating
Feeding behavior entails far more than sustenance alone. We must eat
and drink to live, but we also derive great pleasure from these acts. For
many people, eating is a focus of daily life, if not for survival, then for its
centrality to social activities, from get-togethers with family and friends
to business meetings and even to group identification. Are you a
gourmet, a vegetarian, or a snack food junkie? Do you diet?
Control over eating is a source of frustration and even grief for many
people in the developed world. In 2000, the World Health Organization
identified obesity, the excessive accumulation of body fat, as a
worldwide epidemic. The United States is a case in point. From 1990 to
2010, the proportion of overweight people increased from about 50
percent to 65 percent of the population. The proportion of people
considered obese increased from about 12 percent in 1990 to 33 percent
in 2014.
obesity Excessive accumulation of body fat.
The numbers of overweight and obese children and adults continue to
increase despite a substantial decrease in fat intake in American diets.
What behaviors might cause persistent weight gain? One key to
understanding weight gain in the developed world is evolutionary. Even
40 years ago, much of our food was only seasonally available. In a world
with uncertain food availability, it makes sense to store excess body
calories in the form of fat to be used later when food is scarce. Down
through history and in many cultures today, plumpness was and is
desirable as a standard of beauty and a sign of health and wealth.
In postindustrial societies, where food is continuously and easily
available, being overweight may not be the healthiest condition. People eat as
though food will be scarce and fail to burn off the extra calories by
exercising, and the result is apparent. About half of the U.S. population
has dieted at some point in their life. At any given time, at least 25
percent report that they are currently on a diet. For a comparison of how
some well-known dieting programs perform, see Clinical Focus 12-4 ,
Weight Loss Strategies, on page 428 .
Most Americans are overweight despite living in a culture obsessed
with slimness. The human control system for feeding has multiple
neurobiological inputs, including cognitive factors such as thinking
about food and the association between environmental cues (e.g.,
watching television or studying) and the act of eating. The constant
pairing of such cues with eating can result in the cues alone becoming a
motivation—an incentive to eat. We return to this phenomenon in the
discussion of reward and addiction in Section 12-6 .
Eating disorders entail being either overweight or underweight.
Anorexia nervosa is an eating disorder with a huge cognitive
component: self-image. A person’s body image is highly distorted in
anorexia. This misperception leads to an exaggerated concern with being
overweight. That concern spirals to excessive dieting, compulsive
exercising, and severe, potentially life-threatening weight loss. Anorexia
is especially prevalent among adolescent girls.
anorexia nervosa Exaggerated concern with being overweight that
leads to inadequate food intake and often excessive exercising; can
lead to severe weight loss and even starvation.
The neurobiological control of feeding behavior in humans is not as
simple as it is in the fly described in Section 12-3 . The multiple inputs to
the human control system for feeding come from three major sources:
the cognitive factors already introduced, the hypothalamus, and the
digestive system.
Digestive System and Control of Eating
As illustrated in Figure 12-23 , the digestive tract begins in the mouth
and ends at the anus. Digestion is controlled by the enteric nervous
system. As food travels through the tract, the digestive system extracts
three types of nutrients: lipids (fats), amino acids (the building blocks of
proteins), and glucose (sugar). Each nutrient is a specialized energy
reserve. Because we require varying amounts of these reserves
depending on what we are doing, the body has detector cells to keep
track of the level of each nutrient in the bloodstream.
Figure 2-31 diagrams the inner workings of the ENS and Section 5-
3 the main neurotransmitters it employs.
Glucose is the body’s primary fuel and virtually the only energy
source for the brain. Because the brain requires glucose even when the
digestive tract is empty, the liver acts as a short-term reservoir of
glycogen, a starch that serves as an inert store of glucose. When blood
sugar levels fall, as when we are sleeping, detector cells tell the liver to
convert glycogen into glucose for release into the bloodstream.
Thus the digestive system functions mainly to break down food, and
the body needs to be apprised of how this breakdown is proceeding.
Feedback mechanisms provide such information. When food reaches the
intestines, it interacts with receptors in the ENS to trigger the release of
at least 10 different peptide hormones, including cholecystokinin (CCK),
glucagonlike peptide 1 (GLP-1), and peptide YY (PYY). Each, released as
food is being absorbed, acts as a satiation (satiety) signal
that inhibits food intake. For example, when CCK is infused into an
animal’s hypothalamus, the animal’s appetite diminishes.
CLINICAL FOCUS 12-4
Conclusion: The VMH plays a role in controlling the cessation of
eating. Damage to the VMH results in prolonged and dramatic weight
gain.
Controlling Drinking
Dissolved in the water that is 70 percent of the human body are
chemicals that participate in the hundreds of reactions necessary to
bodily functions. Essential homeostatic mechanisms control water levels
(and hence chemical concentrations) within rather narrow limits. The
rate of a chemical reaction is partly determined by the concentration of
the participating chemicals.
As with eating, we drink for many reasons. We consume some
beverages, such as coffee, wine, beer, and juice, for an energy boost or to
relax, as part of social activities, or just because they taste good. We
drink water for its health benefits, to help wash down a meal or to
intensify the flavor of dry foods. On a hot day, we drink water because
we are thirsty, presumably because we become dehydrated through
sweating and evaporation.
These examples illustrate the two kinds of thirst. Osmotic thirst
results from increased concentrations of dissolved chemicals, known as
solutes, in the body fluids. Hypovolemic thirst results from a loss of
overall fluid volume from the body.
osmotic thirst Thirst that results from a high concentration of
dissolved chemicals, or solutes, in body fluids.
hypovolemic thirst Thirst produced by a loss of overall fluid
volume from the body.
Osmotic Thirst
Solutes found inside and outside cells are ideally concentrated for the
body’s chemical reactions. Maintaining this concentration requires a kind
of homeostat, much like the mechanism that controls body temperature.
Deviations from the ideal solute concentration activate systems to
reestablish it.
When we eat salty foods, such as potato chips, the salt (NaCl) spreads
through the blood and enters the extracellular fluid between our cells.
This shifts the solute concentration away from the ideal. Receptors in the
hypothalamus along the third ventricle detect the altered solute
concentration and relay the message “too salty” to various hypothalamic
areas that in turn stimulate us to drink. Other messages are sent to the
kidneys to reduce water excretion.
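The loop just described is a negative-feedback homeostat, and a toy Python sketch may make the logic explicit. This is an illustration only, not a model from the text; the set point, tolerance, and unit values are invented.

SET_POINT = 300.0   # "ideal" solute concentration, in arbitrary units
TOLERANCE = 5.0     # deviation tolerated before corrective action begins

def hypothalamic_response(solute_concentration):
    """Return the corrective signals triggered by the current concentration."""
    too_salty = solute_concentration > SET_POINT + TOLERANCE
    return {
        "stimulate_drinking": too_salty,       # dilute the solutes
        "reduce_kidney_excretion": too_salty,  # conserve body water
    }

print(hypothalamic_response(295.0))  # within range: no correction needed
print(hypothalamic_response(320.0))  # after salty food: drink and conserve water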
Turning to sugar-sweetened beverages to quench thirst from eating
salty foods increases the likelihood of weight gain.
Water Intoxication
Eating too much leads to obesity. What happens when we drink too
much water? Our kidneys are efficient at processing water, but if we
drink a large volume all at once, the kidneys cannot keep up.
The result is a condition called water intoxication. Body tissues swell
with the excess fluid, essentially drowning the cells in freshwater. At the
same time, the relative concentration of sodium drops, leading to an
electrolyte imbalance.
Water intoxication can produce widely ranging symptoms, from
irregular heartbeat to headache. In severe cases, people may act as
though they are drunk. The most likely way for an adult to develop water
intoxication is to sweat heavily, by running a marathon in hot weather,
for example, and then drink too much water without added electrolytes.
Hypovolemic Thirst
Unlike osmotic thirst, hypovolemic thirst arises when the total volume of
body fluids declines, motivating us to drink more and replenish them. In
contrast with osmotic thirst, however, hypovolemic thirst encourages us
to choose something other than water, because water would dilute the
solute concentration in the blood. Rather, we prefer to drink flavored
beverages that contain salts and other nutrients.
Hypovolemic thirst and its satiation are controlled by a hypothalamic
circuit different from the one that controls osmotic thirst. When fluid
volume drops, the kidneys send a hormone signal (angiotensin) that
stimulates midline hypothalamic neurons. These neurons, in turn,
stimulate drinking.
© Neal Preston/Corbis
Peter Brooker/Rex Features/Alamy
NC P/Star Max/Getty Images
Before accepting the Arthur Ashe Courage Award at the 2015 ESPY
ceremony, Caitlyn Jenner was known worldwide as Bruce Jenner. Bruce
won Olympic Gold in the 1976 decathlon competition. Caitlyn began her
physical transition nearly four decades later. At left, Bruce in 1984 and at
center in 2003; at right, Caitlyn in 2015.
The general conclusions from these studies and from related studies
of gamblers are that neural reward pathways are diffuse and that the
pathways’ size and activity are related to the reward’s intensity. The hope
is that rs-fMRI can be used as a biomarker for both severity of addiction
and efficacy of treatment. But the extent of addiction-related changes
suggests that eliminating nicotine addiction is not easy—a conclusion
many nicotine addicts would agree with.
12-6 REVIEW
Reward
Before you continue, check your understanding.
1 . Animals engage in voluntary behaviors because the behaviors are
____________.
2 . Neural circuits maintain contact with rewarding environmental
stimuli in the present or in the future through ____________ and
____________ subsystems.
3 . The neurotransmitter systems hypothesized to be basic to reward are
___________, ___________, and ___________ systems.
4 . What is intracranial self-stimulation, and why is it rewarding?
Answers appear at the back of the book.
12-4 Neuroanatomy of Motivated and Emotional Behavior
The neural structures that initiate emotional and motivated behaviors are
the hypothalamus, pituitary gland, amygdala, the dopaminergic and
noradrenergic activating pathways from nuclei in the lower brainstem,
and the frontal lobes.
The experience of both emotion and motivation is controlled by
activity in the ANS, hypothalamus, and forebrain, especially the
amygdala and frontal cortex. Emotional and motivated behavior may be
unconscious responses to internal or external stimuli controlled either by
the activity of innate releasing mechanisms or by cognitive responses to
events or thoughts.
12-6 Reward
Survival depends on maximizing contact with some environmental
stimuli and minimizing contact with others. The reward mechanism
controls this differential. Two independent features of reward are
wanting and liking. The wanting component is thought to be controlled
by dopaminergic activating systems, whereas the liking component is
thought to be controlled by opioid and GABA–benzodiazepine systems.
KEY TERMS
amygdala, p. 418
androgen, p. 401
anorexia nervosa, p. 427
aphagia, p. 429
emotion, p. 399
evolutionary psychology, p. 406
gender identity, p. 435
generalized anxiety disorder, p. 424
hippocampus, p. 416
homeostatic mechanism, p. 411
hyperphagia, p. 429
hypovolemic thirst, p. 431
innate releasing mechanism (IRM), p. 406
Klüver–Bucy syndrome, p. 423
learned taste aversion, p. 409
medial forebrain bundle (MFB), p. 413
motivation, p. 399
nonregulatory behavior, p. 413
obesity, p. 427
orbitofrontal cortex (OFC), p. 403
osmotic thirst, p. 431
panic disorder, p. 424
pheromone, p. 403
phobia, p. 424
pituitary gland, p. 413
prefrontal cortex (PFC), p. 418
preparedness, p. 409
psychosurgery, p. 423
regulatory behavior, p. 411
reinforcer, p. 409
releasing hormone, p. 414
sensory deprivation, p. 401
sexual dimorphism, p. 433
sexual orientation, p. 435
somatic marker hypothesis, p. 421
transgender, p. 435
13
Earth’s axis is tilted slightly, so as it orbits the sun once each year, the
North and South Poles incline slightly toward the sun for part of the year
and slightly away from it for the rest of the year. As the Southern
Hemisphere inclines toward the sun, its inhabitants experience summer:
more direct sunshine for more hours each day, and warmer weather. At
the same time, inhabitants of the Northern Hemisphere, inclined away
from the sun, experience winter: less direct sunlight, shorter days, and
colder weather.
inclinations reverse, as do the seasons. Tropical regions near the equator
undergo little seasonal or day length change as Earth progresses around
the sun.
Daily and seasonal changes have combined effects on organisms,
inasmuch as the onset and duration of daily change depend on the season
and latitude. Animals living in polar regions have to cope with greater
seasonal fluctuations in daily temperature, light, and food availability
than do animals living near the equator.
We humans largely evolved as equatorial animals, and our behavior is
dominated by a circadian rhythm of daylight activity and nocturnal sleep.
Nevertheless our daily cycles adapt to extreme latitudes. Not only does
human waking and sleep behavior cycle daily; so also do pulse rate,
blood pressure, body temperature, rate of cell division, blood cell count,
alertness, urine composition, metabolic rate, sexual drive, feeding
behavior, and responsiveness to medications. The activity of nearly every
cell in our bodies shares a daily rhythm.
Biorhythms are not unique to animals. Plants display rhythmic
behavior exemplified by species whose leaves or flowers open during the
day and close at night. Even unicellular algae and fungi display rhythmic
behaviors related to the passage of the day. Some animals, including
lizards and crabs, change color in a rhythmic pattern. The Florida
chameleon, for example, turns green at night, whereas its coloration
matches its environment during the day. In short, almost every living
organism and every living cell displays rhythms related to daily changes
(Bosler et al., 2015).
Biological Clocks
If animal behavior were affected only by daily changes in external cues,
the neural mechanisms that account for changes in behavior would be
simple to study. An external cue—say, sunrise—could be isolated and
the neural processes that respond to the cue identified.
EXPERIMENT 13-1 (photo panels: leaf up, leaf down; photographs by Bryan
Kolb/Ian Whishaw)
Ultradian rhythms have a period of less than one day. Our eating
behavior, which, counting snacks, takes place about every 90 minutes to 2
hours, is one ultradian rhythm. Rodents, although active throughout the
night, display an ultradian rhythm in being most active at the beginning
and end of the dark period.
The fact that a behavior appears to be rhythmic does not mean that it is
ruled only by a biological clock. Animals may postpone migrations as
long as food supplies last. They adjust their circadian activities in
response to the availability of food, the presence of predators, and
competition from other members of their own species. We humans
obviously change our daily activities in response to seasonal changes,
work schedules, and play opportunities. Therefore, whether a rhythmic
behavior is produced by a biological clock and the extent to which it is
controlled by a clock must be demonstrated experimentally.
Free-Running Rhythms
To determine whether a rhythm is produced by a biological clock,
researchers design three types of tests in which they manipulate relevant
cues, especially light cues. A test is given (1) in continuous light, (2) in
continuous darkness, or (3) by choice of the participant. Each treatment
yields a slightly different insight into the periods of biological clocks.
Jürgen Aschoff and Rütger Wever first demonstrated that the human
sleep–waking rhythm is governed by a biological clock. They allowed
participants to select their light–dark cycle and studied them in an
underground bunker, where no cues signaled when day began or ended.
The participants selected the periods when they were active and when
they slept, and they turned the lights on and off at will. In short they
selected the length of their own day and night.
Measures of ongoing behavior and recording of sleep periods with
sensors on the beds revealed that the participants continued to show daily
sleep–activity rhythms. This finding demonstrates that humans have an
endogenous biological clock that governs sleep–waking behavior. Figure
13-3 shows, however, that the rhythm during isolation differed from the
rhythms recorded before and after isolation. Although the period of the
participants’ sleep–wake cycles approximated 24 hours before and after
the test, during the test they lengthened to about 25 to 27 hours,
depending on the person.
Zeitgebers
Endogenous rhythmicity is not the only factor that contributes to
circadian periods. A mechanism exists for setting rhythms to correspond
to environmental events as well. To be useful, the biological clock must
keep to a time that predicts actual changes in the day–night cycle. If a
biological clock is like a slightly defective wristwatch, it will eventually
provide times that are inaccurate by hours and so be useless.
If we reset an errant wristwatch each day, however—say, when we
awaken—it provides useful information even though it is not perfectly
accurate. Equivalent ways of resetting a free-running biological clock
include sunrise and sunset, eating times, and many other activities that
influence the period of the circadian clock.
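A rough worked example (not from the text) shows why resetting matters. Suppose a clock free-runs at 25.5 hours, a value within the 25- to 27-hour range reported above; without a Zeitgeber its error grows by 1.5 hours per day, whereas a daily reset holds the error near zero.

FREE_RUNNING_PERIOD = 25.5  # hours; illustrative value from the reported range
DAY_LENGTH = 24.0           # hours

def phase_error_after(days, entrained):
    """Hours by which the clock lags the actual day after a number of days."""
    if entrained:
        return 0.0  # a daily Zeitgeber resets the clock each cycle
    return (FREE_RUNNING_PERIOD - DAY_LENGTH) * days

print(phase_error_after(7, entrained=False))  # 10.5 hours out of phase in a week
print(phase_error_after(7, entrained=True))   # 0.0: the rhythm stays entrained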
Aschoff and Wever called a clock-setting cue a Zeitgeber (time giver
in German). When a Zeitgeber resets a biorhythm, the rhythm is said to
be entrained. Light is the most potent entraining stimulus. Clinical Focus
13-2 , Seasonal Affective Disorder, explains its importance in entraining
circadian rhythms.
Zeitgeber Environmental event that entrains biological rhythms;
German for time giver.
entrain Determine or modify the period of a biorhythm.
If a hamster happens to blink during this Zeitgeber, the light will still
penetrate its closed eyelids and entrain its biological clock.
CLINICAL FOCUS 13-2
Keeping Time
If SCN neurons are isolated from one another, each remains rhythmic,
but the period of some cells differs from that of other cells. Thus rhythmic
activity is a property of SCN cells, but the timing of the rhythm must be
set so that the cells can synchronize their activity in relation to each other.
In the brain, SCN cells connect one to another through inhibitory GABA
synapses, and these connections allow them to act in synchrony. Their
entrainment depends upon external inputs, however.
GABA is the main inhibitory neurotransmitter in the CNS.
The SCN receives information about light through the
retinohypothalamic tract ( Figure 13-6 ). This pathway begins with
specialized retinal ganglion cells (RGCs) that contain the photosensitive
pigment melanopsin. These melanopsin-containing photosensitive RGCs
receive light-related signals from the rods and cones and send that
information to the brain’s visual centers. Melanopsin-containing pRGCs
also can be activated directly by certain wavelengths of blue light in the
absence of rods and cones.
retinohypothalamic tract Neural route formed by axons of
photosensitive retinal ganglion cells from the retina to the
suprachiasmatic nucleus; allows light to entrain the rhythmic activity
of the SCN.
Section 9-2 traces three main routes from the retina to the visual
brain. Figure 9-8 diagrams the retina’s cellular structure.
Melanopsin-containing photosensitive RGCs are distributed across the
retina, and in humans they make up between 1 percent and 3 percent of
all RGCs. Their axons project to various brain regions, including the
SCN, which they innervate bilaterally. Melanopsin-containing ganglion
cells use glutamate as their primary neurotransmitter but also contain two
cotransmitters, substance P and pituitary adenylate cyclase–activating
polypeptide (PACAP).
Glutamate is the main excitatory neurotransmitter in the CNS.
When stimulated by light, melanopsin-containing pRGCs are excited,
and in turn they excite cells in the SCN. The existence of light-sensitive
retinal ganglion cells that are involved in entraining the circadian rhythm
explains the continued presence of an entrained rhythm in people who are
blind as a result of retinal degeneration that destroys the rods and cones
(Zaidi et al., 2007). Even so, melanopsin-containing pRGCs do receive
inputs from cones and rods. Cones can influence their activity in bright
daylight, and rods can influence their activity in dim light.
As illustrated in Figure 13-6 , the SCN consists of two parts, a more
ventrally located core and a more dorsally located shell. The
retinohypothalamic tract activates the core cells. Core neurons are not
rhythmic, but they entrain the shell neurons, which are rhythmic.
In addition to retinohypothalamic input, the SCN receives projections
from other brain regions, including the intergeniculate leaflet in the
thalamus and the raphe nucleus, which is the nonspecific serotonergic-
activating system of the brainstem. The terminal regions of these inputs to
the SCN display variations, suggesting that various portions of the shell
and the core have somewhat different functions.
The SCN’s circadian rhythm is usually entrained by morning and
evening light, but it can also be entrained or disrupted by sudden changes
in lighting, by arousal, by moving about, and by feeding. These
influences differ from the light entrainment provided over the
retinohypothalamic tract.
The intergeniculate leaflet and the raphe nucleus are pathways through
which nonphotic events influence the SCN rhythm (Cain et al., 2007).
The neural structures mediating these other entraining pathways explain
why being aroused or eating during the sleep portion of the circadian
cycle disrupts the cycle (Mistlberger & Antle, 2011).
It is likely that the regional variation in SCN shell anatomy provides
the substrate for various rhythms of the circadian cycle. Findings from
studies on the genes that control rhythms in fruit flies suggest two
separate groups of circadian neurons. M cells control morning activity;
they need morning light for entrainment. E cells control evening activity;
they need onset of darkness for entrainment (Hughes et al., 2015).
Some people are early to bed and early to rise and are energetic in the
morning. Other people are late to rise and late to bed and are energetic in
the evening. Individual differences in circadian activity between these
“lark” and “owl” chronotypes are due to differences in SCN shell
neurons. The differences may be the equivalent of fruit fly M cells and E
cells. When expressed differently in people, these genes may account for
differences in the amplitude of phases in the circadian period (Pellegrino
et al., 2015). Whether students are larks or owls does affect their
performance on cognitive tasks, possibly by affecting memory for content
(Smarr, 2015).
chronotype Individual differences in circadian activity.
(Figure labels: suprachiasmatic lesion; suprachiasmatic transplant; hours of
sleep by age: preschool 10–13, adolescent 8–10, adult 7–9; deep sleep)
Relaxed State
When participants relax and close their eyes, they may produce the alpha
(α) rhythm—large, extremely regular brain waves with a frequency
ranging from 7 to 11 Hz. Humans generate alpha rhythms in the region of
the visual cortex at the back of the brain, and the rhythms abruptly stop if
a relaxed person is disturbed or opens his or her eyes. Not everyone
displays alpha rhythms, and some people display them much better than
others.
Drowsy State
When a person grows drowsy, the EEG indicates that beta wave activity
in the neocortex gives way to slower EEG wave activity. The amplitude
of the EEG waves increases and their frequency becomes a slower theta
(θ) rhythm of 4- to 7-Hz waves. Concurrently the EMG remains active, as
the muscles have tone, and the EOG indicates that the eyes are not
moving.
Sleeping State
As participants enter deeper sleep, they produce yet slower, larger EEG
waves called delta (δ) rhythms. Delta rhythm has a frequency of 1 to 3
Hz and is associated with the loss of consciousness that characterizes
sleep. This stage is sometimes called slow-wave sleep. Still, the EMG
indicates muscle activity, signifying that the muscles retain tone, although
the EOG indicates that the eyes do not move.
delta (δ) rhythm Slow brain wave activity pattern associated with
deep sleep.
slow-wave sleep NREM sleep.
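As a simple illustration (not from the text), the frequency bands just described can be expressed in Python as a lookup from a dominant EEG frequency to a rhythm name. The band boundaries are the ones given above; the edge handling and the catch-all category are simplifications added for the sketch.

def classify_rhythm(frequency_hz):
    """Map a dominant EEG frequency (Hz) to the rhythm names used above."""
    if 1.0 <= frequency_hz <= 3.0:
        return "delta (deep sleep)"
    if 4.0 <= frequency_hz <= 7.0:
        return "theta (drowsy)"
    if 7.0 < frequency_hz <= 11.0:
        return "alpha (relaxed, eyes closed)"
    return "outside the bands described here (e.g., faster waking activity)"

for hz in (2.0, 5.5, 10.0, 20.0):
    print(hz, "Hz ->", classify_rhythm(hz))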
REM and NREM Sleep Phases
Sleep consists of periods when a sleeper is relatively still and periods
when the mouth, fingers, and toes twitch. This behavior is readily
observed in household pets and bed partners. In 1955 Eugene Aserinsky
and Nathaniel Kleitman (Lamberg, 2003), working at the University of
Chicago, observed that the twitching is periodic and is also associated
with rapid eye movements (REM). Other than twitches and eye
movements, the EMG indicates that muscles are inactive, a condition
termed atonia (Greek via Latin, “without tone”).
atonia Lacking tone; condition of complete muscle inactivity
produced by motor neuron inhibition.
By accumulating and analyzing REM recorded on EEGs, the Chicago
investigators were the first to identify REM sleep, a fast-wave sleep
period whose pattern is reminiscent of a waking EEG (see Dement,
1972). This discovery led to the contemporary naming of two sleep states,
REM and NREM. The delta rhythm sleep period, during which the EEG
pattern is slow and large and the EOG is inactive, is called NREM (for
non-REM) sleep to distinguish it from REM sleep.
REM sleep Fast brain wave pattern displayed by the neocortical
EEG record during sleep.
Other Sleep- and Waking-Related Activities
Although electrophysiological measures define the sleep stages, many
other bodily and physiological events are associated with waking and
sleep stages. Measures of metabolic activity, such as body temperature,
generally decline during sleep. Breathing and heart rate provide further
insights into waking and sleeping. The sleeper's behavior, such as tossing
and turning or moaning and laughing, also occurs during specific sleep stages.
Taken together, these measures yield insights into the many causes and
symptoms of normal sleep and disturbed sleep.
NREM (non-REM) sleep Slow-wave sleep associated with delta
rhythms.
FIGURE 13-12 Sleep Recording and Revelations (A) EEG
patterns associated with waking, with the four NREM sleep stages, and with
REM sleep. (B) Over a typical night's sleep, a person undergoes several
sleep state changes in roughly 90-minute periods. NREM sleep dominates
the early sleep periods, and REM sleep dominates later sleep. The
duration of each sleep stage is reflected in the thickness of each bar, which
is color-coded to the corresponding stage in part A. The depth of each
stage is graphed as the relative length of the bar. Information from D. D. Kelley
(1991). Sleep and Dreaming (p. 794). In E. R. Kandel, J. H. Schwartz, & T. M. Jessell (Eds.),
Principles of Neural Science. New York: Elsevier.
REM sleep is no less eventful than NREM sleep. During REM sleep our
eyes move; our toes, fingers, and mouths twitch; and males have penile
erections. Still, we are paralyzed, as indicated by atonia. This absence of
muscle tone stems from motor neuron inhibition by sleep regions of our
brainstem. In the sleep lab atonia is recorded on an electromyogram as the
absence of muscle activity (see Figure 13-11 B).
Posture is not completely lost during NREM sleep. You can get an idea
of NREM sleep posture by observing a cat or dog sleeping. During NREM
sleep the animal may be lying down but still maintains some posture, with
the head held partially upright. At the onset of REM sleep the
animal usually subsides into a sprawled position as muscle paralysis sets in.
Figure 13-14 illustrates the sleep postures of a horse. Horses can sleep
while standing up by locking their knee joints, and they can sleep while
lying down with their head held slightly up. At these times they are in
NREM sleep. When they are completely sprawled out, they are in REM
sleep.
During REM sleep mammals’ limbs twitch visibly, and if you look
carefully at the face of a dog or cat, you will also see the skin of the snout
twitch and the eyes move behind the eyelids. It might seem strange that an
animal that is paralyzed can make small twitching movements, but the
neural pathways that mediate these twitches obviously are spared the
paralysis.
One explanation for the twitching of eyes, face, and distal parts of the
limbs is that such movements help to maintain blood flow in those parts of
the body. Another explanation is that the brain is developing coordinated
movements and tuning the neural circuits that support those movements—
an activity especially important to infants, who have not yet developed full
motor control (Blumberg, 2015).
An additional change resulting from atonia during REM sleep is that the
mechanisms regulating body temperature stop working and body temperature
drifts toward room temperature. Depending on how warm or cool the room is,
you may therefore wake up from a REM period feeling hot or cold (or wake
up because you are hot or cold).
FIGURE 13-14 Nap Time Horses usually seek open, sunny areas for a brief
sleep. I. Q. W.’s horse Lady Jones illustrates three sleep postures. Top:
NREM sleep, standing with legs locked and head down. Center: NREM
sleep, lying down with head up. Bottom: REM sleep, in which all postural and
muscle tone is lost.
Dreaming
The most remarkable aspect of REM sleep—dreaming—was discovered by
William Dement and Nathaniel Kleitman in 1957 (Dement, 1972). When
participants were awakened from REM sleep, they reported that they had
been having vivid dreams. In contrast, participants aroused from NREM
sleep were much less likely to report that they had been dreaming, and the
dreams they did report were much less vivid. The technique of electrical
recording from a sleeping participant in a sleep laboratory made it possible
to subject dreams and dreaming to experimental analysis. Such studies
provided objective answers to interesting questions concerning dreaming.
They also raised many more questions, because the electrical events of
sleep are complex and difficult to associate with the content of specific
dreams (Nir & Tononi, 2010).
How often do we dream? Reports by people on their dreaming behavior
once suggested that dreaming was quite variable: some reported that they
dreamed frequently and others that they never dreamed. Waking
participants up during REM periods showed that everyone dreams, that they
dream several times each night, and that dreams last longer as a sleep
session progresses. Those who claimed not to dream presumably forgot
their dreams. Perhaps people forget their dreams because they do not wake
up during a dream or immediately afterward, which allows subsequent
NREM sleep activity to erase the memory of the dream. Perhaps, too, the
neural systems that store our daily memories are inactive during NREM
sleep.
How long do dreams last? Common wisdom once suggested that dreams
last but an instant. By waking people up at different intervals after the onset
of REM sleep and matching the reported dream content to the previous
duration of REM sleep, however, researchers demonstrated that dreams
appear to take place in real time. An action that a person performs in a
dream lasts about as long as it would take to perform while awake. It is
likely that time shrinking is a product of remembering a dream, just as time
shrinking is a feature of our recall of other memories.
In contrast to the threat interpretation of dreams, Malcolm-Smith and her
coworkers (2012), drawing on their own analysis of dream content, report
that approach behavior occurs more frequently in dreams than does
avoidance behavior. They therefore suggest that reward-seeking behavior is
as likely to represent a dream’s latent content as avoidance behavior is.
An extension of the top-down approach to dream interpretation contends
that people are problem solvers when awake, and problem solving
continues during sleep (Edwards et al., 2013). Have you ever been advised
to sleep on it? Did that prove to be good advice?
Part of the challenge in studying dreams is that they occur throughout
our sleep–waking behavior. Dreams occur during NREM sleep but not as
vividly as in REM sleep. We can have dreamlike experiences just as we
drop off to sleep, as our ongoing thoughts seem to disintegrate into
hallucinations. Sometimes we are aware of our dreams as we dream, a
phenomenon called lucid dreaming. People who have vivid dreamlike
experiences when awake are said to be hallucinating. And of course we
daydream when awake. Eric Klinger (1990) suggests that daydreams are
ordinary and often fun, with little of the turmoil of REM dreams, and so
seem the true opposite of night dreams.
Much about dreams is not understood. Very young children spend a lot
of time in REM sleep yet do not report complex dreams filled with emotion
and conflict. Children may experience brief frightening dreams called night
terrors during NREM sleep. Night terrors can be so vivid that the child
continues to experience the dream and the fear after awaking. A 4-year-old
child suddenly woke up screaming that she was covered in ants. It took
hours for her father to convince her that her experience was not real and
that she could go back to bed. Only reassurance from a sleep expert later
convinced the father that nothing was amiss with his daughter and that night
terrors are common among young children (Carter et al., 2014).
13-3 REVIEW
Sleep Stages and Dreaming
Before you continue, check your understanding.
1 . Sleep consists of two phases: ___________, which stands for
___________, and ___________, which stands for ___________.
2 . REM sleep is characterized by eye movement, as recorded by the
___________; atonia, recorded by the ___________; and waking
activity, recorded by the ___________.
3 . Sleepers experience about ___________ REM sleep periods each
night, with each period ___________ as sleep progresses.
4 . Evidence from sleep lab analysis suggests that ___________ dreams
and that dreams take place in ___________.
5 . What major factor makes interpreting dreams difficult?
Answers appear at the back of the book.
Sleep Apnea
The first time I went to a doctor for my insomnia, I was twenty-
five—that was about thirty years ago. I explained to the doctor
that I couldn’t sleep; I had trouble falling asleep, I woke up many,
many times during the night, and I was tired and sleepy all day
long. As I explained my problem to him, he smiled and nodded.
Inwardly, this attitude infuriated me—he couldn’t possibly
understand what I was going through. He asked me one or two
questions: Had any close friend or relative died recently? Was I
having any trouble in my job or at home? When I answered no, he
shrugged his shoulders and reached for his prescription pad. Since
that first occasion I have seen I don’t know how many doctors,
but none could help me. I’ve been given hundreds of different
pills—to put me to sleep at night, to keep me awake in the
daytime, to calm me down, to pep me up—have even been
psychoanalyzed. But still I cannot sleep at night. (In Dement,
1972, p. 73)
When this patient entered the Stanford University Sleep Disorders
Clinic in 1972, recording electrodes monitored his brain, muscle,
eye, and breathing activity while he slept (see Figure 13-11 ). The
attending researchers were amazed to find that he had to wake up to
breathe. They observed that he would go more than a minute without
breathing before he woke up, gasped for breath, and returned to
sleep. Then the sequence began again.
Sleep apnea may be produced by a CNS problem, such as weak
neural command to the respiratory muscles, or it may be obstructive,
caused by collapse of the upper airway. When people with sleep
apnea stop breathing, they either wake up completely and have
difficulty getting back to sleep or they partially awaken repeatedly
throughout the night to gasp for breath.
Sleep apnea affects people of all ages and both sexes, and 30
percent of those older than 65 may have some form of it. Sleep apnea
can even occur in children; it may be related to some cases of sudden
infant death syndrome (SIDS), or crib death, in which otherwise
healthy infants inexplicably die in their sleep. Sleep apnea is thought
to be more common among overweight people and those who snore,
two conditions in which airflow is restricted.
Breathing rate and blood oxygen level recorded during REM sleep
from a person with sleep apnea. Blood oxygen increased after each
breath, then continued to fall until another breath was taken. This
person inhaled only 4 times in the 6-minute period; a healthy sleeper
would breathe more than 60 times in the same interval.
13-6 REVIEW
Sleep Disorders
Before you continue, check your understanding.
1 . Disorders of NREM sleep include ___________, in which a person
has difficulty falling asleep at night, and ___________, in which a
person falls asleep involuntarily in the daytime.
2 . Treating insomnia with sleeping pills, usually sedative-hypnotics,
may cause ___________: progressively higher doses must be taken to
achieve sleep.
3 . Disorders of REM sleep include ___________, in which a person
awakens but cannot move and is afraid, and ___________, in which a
person may lose all muscle tone and collapse while awake.
4 . The people who act out their dreams, a condition termed
___________, may have damage to the ___________ nucleus.
5 . Is orexin the substance that produces waking?
Answers appear at the back of the book.
13-7 What Does Sleep Tell Us about Consciousness?
René Descartes conceived his idea of a mind through a lucid dream. He
dreamed that he was interpreting the dream as it occurred. Later, when
awake, he reasoned that if he could think and analyze a dream while
asleep, his mind must function during both waking and sleeping. He
proposed therefore that the mind must be independent of the body that
undergoes sleeping and waking transitions. Contemporary fMRI studies
suggest that lucid dreaming is especially common in people who display
high levels of prefrontal cortex activity in Brodmann’s areas 9 and 10
(Filevich et al., 2015).
Section 1-2 recounts how Descartes chose the pineal gland as the
seat of the mind.
As described in preceding sections, what we colloquially refer to as
waking comprises at least three states. First, alert consciousness without
accompanying movement is associated with cholinergic system activity.
Second, consciousness with movement is associated with serotonergic
system activity. Third, the peptide orexin also plays a role in maintaining
waking activity.
Similarly, sleep consists of NREM and REM phases. NREM sleep
consists of the four stages indicated by the EEG (see Figure 13-12 A).
REM sleep periods consist of at least two stages, one in which small
twitching movements occur and one in which such twitching is
absent.
REM neurobehavioral events can occur relatively independently.
Sleepers may awake to find themselves in a condition of sleep paralysis,
during which they experience the hallucinations and fear common in
dreams. People who are awake may fall into a state of cataplexy: they are
conscious of being awake during the atonia and visual and emotional
features of dreams.
Sleep researcher J. Allan Hobson reported his peculiar symptoms
after a brainstem stroke (Hobson, 2002). For the first 10 days after the
lesion, he had complete insomnia, neither REM nor NREM sleep.
Whenever he closed his eyes, however, he did have visual hallucinations
that had a dreamlike quality. This experience suggested that eye closure
is sufficient to produce the visual components of REM sleep but with
neither loss of consciousness nor atonia. Hobson eventually recovered
typical sleeping patterns, and the hallucinations stopped.
Beyond teaching us that the neural basis of consciousness is
extremely complex, the study of sleep states and dreaming may help to
explain some psychiatric and drug-induced conditions. For example,
visual and auditory hallucinations are among the symptoms of
schizophrenia. Are these hallucinations dream events that occur
unexpectedly during waking? Many people who take hallucinogenic
drugs such as LSD report visual hallucinations. Does the drug initiate the
visual features of dreams? People who have panic attacks suffer from
very real fright that has no obvious cause. Are they experiencing the fear
attacks that commonly occur during sleep paralysis and cataplexy?
What the study of sleep tells us about consciousness is that a
remarkable number of variations of conscious states exist. Some are
associated with waking and some with sleeping, and the two can mix
together to produce a variety of odd conditions. When it comes to
consciousness, there is far more to sleeping and waking than just
sleeping and waking.
Section 15-7 explores the neural basis of consciousness and ideas
about why humans are conscious.
SUMMARY
13-1 A Clock for All Seasons
Biorhythms are cyclic behavior patterns of varying length displayed by
animals, plants, even single-celled organisms. Biorhythms displayed by
mammals include, among others, circadian (daily) rhythms and
circannual (yearly) rhythms. In the absence of environmental cues
circadian rhythms are free-running, lasting a little more or a little less
than their usual period of about 24 hours depending on the individual
organism or the environmental conditions. Cues called Zeitgebers reset
biological clocks to a 24-hour rhythm. Circadian rhythms allow us to
synchronize our behavior with our body's metabolic processes so that, for
example, we eat when we are hungry and at optimal times. Environmental
intrusions into our
natural circadian rhythm, from artificial lighting to jet lag, contribute to
metabolic syndrome. Biological clocks produce epigenetic effects: they
regulate gene expression in every cell in the body.
13-2 Neural Basis of the Biological Clock
A biological clock is a neural structure responsible for producing
rhythmic behavior. Our master biological clock is the suprachiasmatic
nucleus. The SCN is responsible for circadian rhythms; it has its own
free-running rhythm with a period of a little more or a little less than 24
hours. Stimuli from the environment, such as sunrise and sunset, meals,
or exercise, entrain the free-running rhythm so that its period
approximates 24 hours.
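To make the idea of entrainment concrete, the sketch below simulates a free-running clock whose period is 24.5 hours and lets a daily light Zeitgeber correct part of the accumulated phase error each morning. It is a simplified illustration under assumed parameter values (the 24.5-hour period, the correction gain, and all names are ours), not a model taken from this chapter.

# Simplified illustration of entrainment: a clock that free-runs at 24.5 hours
# is pulled toward a 24-hour light/dark cycle by a partial phase correction at
# dawn. Parameter values are assumptions chosen to make the effect visible.
def simulate_clock(days=10, free_running_period=24.5, light_period=24.0,
                   correction_gain=0.5):
    """Return how many hours the clock's subjective dawn lags real dawn, day by day."""
    lag_hours = 0.0
    history = []
    for _ in range(days):
        # Free-running: each environmental day the internal cycle ends a bit later.
        lag_hours += free_running_period - light_period
        # Entrainment: morning light advances the clock, removing part of the lag.
        lag_hours -= correction_gain * lag_hours
        history.append(round(lag_hours, 3))
    return history

print(simulate_clock())                     # lag settles near a small, stable value
print(simulate_clock(correction_gain=0.0))  # no Zeitgeber: drift grows 0.5 h per day

With the correction in place the lag stops growing, so the clock's effective period matches the 24-hour day; without it, the rhythm free-runs and its subjective day begins about half an hour later each day, as described above.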
SCN neurons are active in the daytime and inactive at night. These
neurons retain their rhythmicity when disconnected from other brain
structures, when removed from the brain and cultured in a dish, and after
culture in a dish for many generations. When reimplanted in a brain
without an SCN, they restore the animal’s circadian rhythms. Aspects of
neuronal circadian rhythms, including their period, are under genetic and
epigenetic control.
13-3 Sleep Stages and Dreaming
Sleep events are measured by recording the brain’s activity to produce an
electroencephalogram (EEG), muscular activity to produce an
electromyogram (EMG), and eye movements to produce an
electrooculogram (EOG).
A typical night’s sleep, as indicated by physiological measures,
consists of stages that take place in cycles over the course of the night.
During REM sleep the EEG displays a waking pattern and the sleeper
displays rapid eye movements. Sleep stages in which the EEG has a
slower rhythm are called non-REM (NREM) sleep.
Intervals of NREM sleep and REM sleep alternate four or five times
each night. The duration of NREM sleep periods is longer earlier in
sleep, whereas the duration of REM sleep periods is longer in the later
part of sleep. These intervals also vary with age.
A sleeper in NREM has muscle tone, may toss and turn, and has
dreams that are not especially vivid. A sleeper in REM sleep has vivid
dreams in real time but has no muscle tone and so is paralyzed. Dream
duration coincides with the duration of the REM period.
The activation–synthesis hypothesis proposes that dreams are not
meaningful, merely a by-product of the brain’s state of excitation during
REM. The coping hypothesis suggests that dreaming evolved as a
mechanism to deal with challenges and fears posed by life.
13-4 What Does Sleep Accomplish?
Several theories of sleep have been advanced, but the main proposition is
that sleep is a biological adaptation that conserves energy. Sleep has also
been proposed as a restorative process that repairs wear and tear in the
brain and body. Sleep also organizes and stores memories.
13-5 Neural Bases of Sleep
Separate neural regions are responsible for NREM and REM sleep. The
reticular activating system, located in the central brainstem, is
responsible for NREM sleep. If the RAS is stimulated, a sleeper awakes;
if it is damaged, a person may enter a coma.
The peribrachial area and the medial pontine reticular formation in the
brainstem are responsible for REM sleep. If these areas are damaged,
REM sleep may no longer occur. Pathways projecting from these areas to
the cortex produce the cortical activation of REM, and those projecting
to the brainstem produce the muscular paralysis of REM.
13-6 Sleep Disorders
Disorders of NREM sleep include insomnia, the inability to sleep at
night, and narcolepsy, inconveniently falling asleep in the daytime.
Sedative-hypnotics used to induce sleep may lead to drug dependence
insomnia, a sleep disorder in which progressively larger doses are
required to produce sleep.
Disorders of REM sleep include sleep paralysis, in which a person
awakens but remains unable to move and sometimes feels fear and dread.
In cataplexy, caused by a loss of orexin cells in the brain, a person
collapses into a state of paralysis while awake. At the same time the
person may have hypnogogic hallucinations similar to dreaming. In
REM sleep behavioral disorder a sleeping person acts out dreams.
13-7 What Does Sleep Tell Us about
Consciousness?
Sleep research provides insight into consciousness by revealing many
kinds of waking and sleeping. Just as the events of wakefulness intrude
into sleep, the events of sleep can intrude into wakefulness. The array of
conditions thus produced demonstrates that consciousness is not a
unitary state.
KEY TERMS
atonia, p. 458
basic rest–activity cycle (BRAC), p. 464
beta (β) rhythm, p. 457
biological clock, p. 443
biorhythm, p. 443
cataplexy, p. 474
chronotype, p. 450
circadian rhythm, p. 443
coma, p. 470
delta (δ) rhythm, p. 458
dimer, p. 452
diurnal animal, p. 443
drug dependence insomnia, p. 473
entrain, p. 446
free-running rhythm, p. 446
hypnogogic hallucination, p. 474
insomnia, p. 473
jet lag, p. 448
light pollution, p. 446
medial pontine reticular formation (MPRF), p. 470
melatonin, p. 454
metabolic syndrome, p. 443
microsleep, p. 466
narcolepsy, p. 473
NREM (non-REM) sleep, p. 458
peribrachial area, p. 470
period, p. 445
place cell, p. 468
REM sleep, p. 458
reticular activating system (RAS), p. 470
retinohypothalamic tract, p. 450
sleep apnea, p. 473
sleep paralysis, p. 474
slow-wave sleep, p. 458
suprachiasmatic nucleus (SCN), p. 450
Zeitgeber, p. 446
14
Remediating Dyslexia
As children absorb their society’s culture, acquiring language skills
seems virtually automatic. Yet some people face lifelong challenges
in mastering language-related tasks. Educators classify these
difficulties under the umbrella of learning disabilities.
Dyslexia, impairment in learning to read, may be the most
common learning disability. Children with dyslexia (from Greek
words suggesting bad and reading ) have difficulty learning to write
as well as to read.
dyslexia Impairment in learning to read and write; probably the
most common learning disability.
In 1895, James Hinshelwood, an eye surgeon, examined some
schoolchildren who were having reading problems, but he could find
nothing wrong with their vision. Hinshelwood was the first to
suggest that children with reading problems were impaired in brain
areas associated with language use. Norman Geschwind and Albert
Galaburda (1985) proposed how such impairment might come about.
Struck by the finding that dyslexia is far more common in boys
than in girls, they reasoned that hormones influence early brain
development. They examined postmortem the brains of a small
sample of people who had dyslexia and found abnormal collections
of neurons, or warts, in and around the brain’s language areas.
This relation between structural abnormalities in the brain and
learning disabilities is further evidence that an intact brain is
necessary for healthy human functioning. Geschwind and Galaburda
also found abnormalities in the auditory thalamus, suggesting a
deficit in auditory processing. More recently, brain imaging has
determined that, relative to the brains of healthy participants, activity
is reduced in the left temporoparietal cortex of people with dyslexia.
Michael Merzenich and his colleagues designed a remedial
treatment program based on the assumption that the fundamental
problem in learning disabilities lies in auditory processing,
specifically of language sounds (e.g., Temple et al., 2003).
Remediation involves learning to make increasingly difficult sound
discriminations, for example, discriminating ba and da.
When the sounds are spoken slowly, discriminating between them
is easy, but as they grow briefer and occur faster, discrimination
becomes more difficult. Previous studies using rats and monkeys
showed that discrimination training stimulates neural plasticity in the
auditory system, enabling the discrimination of sounds that previously
was not possible.
The representative fMRIs shown here reveal decreased activation
in many brain regions in untreated dyslexic children compared with
typical children. With training, dyslexic readers can normalize their
brain activity and presumably its connectivity.
The extent of increased brain activation in the language-related
regions (circled in the images) correlates with the amount of increased
brain activation overall. The results suggest that the remedial
treatment both improves brain function in regions associated with
phonological processing and produces compensatory activation in
related brain regions.
Conclusion: The rat has learned an association between the tone and the
shock, which produces a fear response. Circuits that include the amygdala
take part in this learning process.
Explicit Implicit
Declarative Nondeclarative
Fact Skill
Memory Habit
Locale Taxon
Elaboration Integration
Autobiographical Perceptual
Representational Dispositional
Episodic Procedural
Semantic Nonassociative
Working Reference
Note: This paired list of terms differentiates conscious from unconscious forms of
memory. It will help you relate other memory discussions to the one in this book, which
favors the explicit–implicit distinction.
FIGURE 14-5 Memory Distribution Blood flow in left-hemisphere
regions increases when participants generate color words (red) and action
words (blue) to describe static black-and-white drawings of objects. Purple
areas indicate overlap. The red region extends into the ventral temporal
lobe, suggesting that object memory is organized as a distributed system.
Objects’ attributes are stored close to the cortical regions that mediate their
perceptions. Parietal lobe activation likely is related to movements
associated with action words, and frontal lobe activation, to the
spontaneously generated behavior. Information from A. Martin, J. V. Haxby,
F. M. Lalonde, C. L. Wiggs, & L. G. Ungerleider (1995). Discrete cortical
regions associated with knowledge of color and knowledge of action.
Science, 270, p. 104.
FIGURE 14-6 Lost Episodes Left to right: Horizontal, frontal, and sagittal
sections imaged in a patient with impaired autobiographical memory. White
arrows point to areas of reduced glucose metabolism in frontal and
temporal regions as she attempts to retrieve remote personal memories.
Her MRI scan was normal. Republished with permission of Elsevier Science and
Technology Journals from “The impairment of recollection in functional amnesic states” Hans J.
Markowitsch and Angelica Staniloiu, Cortex 49 (2013) 1494–1510. Permission conveyed through
Copyright Clearance Center, Inc.
14-1 REVIEW
Connecting Learning and Memory
Before you continue, check your understanding.
1 . An organism learns that some stimulus is paired with a reward. This
is ___________ conditioning.
2 . After learning that consequences follow its behavior, an organism
modifies its behavior. This is ___________ conditioning.
3 . Information that is unconsciously learned forms ___________
memory, whereas specific factual information forms ___________
memory.
4 . ___________ memory is autobiographical and unique to each person.
5 . Where is memory stored in the brain?
Answers appear at the back of the book.
Information flows from the sensory cortices: first to the parahippocampal and perirhinal
regions, then to the entorhinal cortex, and finally to the hippocampus. The hippocampus
feeds back to the medial temporal regions and then to the neocortical sensory regions.
CLINICAL FOCUS 14-3
Alzheimer Disease
In the 1880s it was noted that the brain may undergo atrophy with
aging, but the reason was not really understood until the German
physician Alois Alzheimer published a landmark study in 1906.
Alzheimer described a set of behavioral symptoms and associated
neuropathology in a 51-year-old woman who was demented. The
cellular structure of her neocortex and allocortex showed various
abnormalities.
An estimated 5.4 million people in the United States have
Alzheimer disease, although the only certain diagnostic test remains
postmortem examination of cerebral tissue. The disease progresses
slowly, and many people with Alzheimer disease probably die of
other causes before the cognitive symptoms incapacitate them.
We knew of a physics professor who continued to work until,
when he was nearly 80, he died of a heart attack. Postmortem
examination of his brain revealed significant Alzheimer pathology.
His colleagues had attributed the professor’s slipping memory to the
old-timer’s disease.
The cause of Alzheimer disease remains unknown, although it has
been variously attributed to genetic predisposition, abnormal levels
of trace elements (e.g., aluminum), immune reactions, slow viruses,
and prions (abnormal, infectious forms of proteins). Two principal
neuronal changes take place in Alzheimer disease:
1. Loss of cholinergic cells in the basal forebrain. One treatment for
Alzheimer disease, therefore, is medication that increases
acetylcholine levels in the forebrain. An example is Exelon, which
is the trade name for rivastigmine, a cholinergic agonist that
appears to provide temporary relief from the progression of the
disease and is available both orally and as a skin patch.
2. Development of neuritic plaques in the cerebral cortex. A
neuritic plaque consists of a central core of homogeneous protein
material ( amyloid ) surrounded by degenerative cellular
fragments. The plaques are not distributed evenly throughout the
cortex but are concentrated especially in temporal lobe areas
related to memory. Neuritic plaques are often associated with
another abnormality, neurofibrillary tangles, paired helical
filaments found in both the cerebral cortex and the hippocampus.
The prion paradigm holds that misfolded tau proteins, illustrated
here, which have the ability to self-propagate, cause many age-
related neurodegenerative diseases (Walker & Jucker, 2015).
Researchers are conducting clinical trials on new drugs that act
either to find and neutralize misfolded proteins or to serve as immunizing
agents that prevent protein misfolding (Wisniewski & Goni, 2015).
neuritic plaque Area of incomplete necrosis (dead tissue)
consisting of a central protein core (amyloid) surrounded by
degenerative cellular fragments; often seen in the cortex of
people with dementias such as Alzheimer disease.
Cortical neurons begin to deteriorate as the cholinergic loss,
plaques, and tangles develop. The first cells to die are in the
entorhinal cortex (see Figure 14-8 ). Significant memory disturbance
ensues.
A controversial idea emerging from stroke neurologists is that
dementia may reflect a chronic cerebrovascular condition caused by
marginally elevated blood pressure. Marginal elevations in blood pressure can lead
to cerebral microbleeds, especially in white matter. The cumulative
effect of years or even decades of tiny bleeds would eventually lead
to increasingly disturbed cognition. This may first appear as mild
cognitive impairment (MCI) that slowly progresses with cumulative
microbleeds.
Thomas Deerinck, NCMIR/Science Source
The red area near the cell nucleus in this false-color brain cell
from a person with Alzheimer disease represents a
neurofibrillary tangle of misfolded tau proteins.
THE FRONTAL LOBE AND SHORT-TERM MEMORY All sensory systems in the
brain send information to the frontal lobe, as do the medial temporal
regions. This information is not used for direct sensory analysis, so it
must have another purpose. In general, the frontal lobe appears to
participate in many forms of short-term memory.
Joaquin Fuster (e.g., Fuster, Bodner, & Kroger, 2000) studied single-
cell activity in the frontal lobe during short-term memory tasks. For
example, if monkeys are shown an object that they must remember for a
short time before being allowed to make a response, neurons in the
prefrontal cortex show sustained firing during the delay. Consider the
tests illustrated in Figure 14-13 :
• In the general design for each test, a monkey is shown a light (the cue),
and after a delay it must make a response to get a reward.
• In the delayed-response task, the monkey is shown two lights in the
choice test and must choose the one that is in the same location as the
cue.
• In the delayed-alternation task, the monkey is again shown two lights
in the choice tests but now must choose the light that is not in the same
location as the cue.
• In the delayed matching-to-sample task, the monkey is shown, say, a
red light, then, after a delay, a red and a green light. The task is to
choose the red light regardless of its new location (a minimal sketch of this trial's logic follows the list).
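The sketch below is a minimal illustration of the delayed matching-to-sample trial structure just described; the function names, the random layout, and the scoring are assumptions for illustration, not the procedure or software used in the experiments.

# Minimal sketch of one delayed matching-to-sample trial: a sample cue is
# shown, a delay follows, and the rewarded choice is the stimulus that matches
# the sample regardless of where it now appears. All names are hypothetical.
import random

COLORS = ["red", "green"]

def run_trial(delay_seconds=5):
    sample = random.choice(COLORS)            # cue shown before the delay
    # ... delay: the subject must hold the sample in short-term memory ...
    choices = random.sample(COLORS, k=2)      # both colors, in a new random layout
    correct_position = choices.index(sample)  # match by identity, not by location
    return {"sample": sample, "choices": choices,
            "correct_position": correct_position, "delay_s": delay_seconds}

def score(trial, chosen_position):
    """Reward only the choice that matches the remembered sample."""
    return chosen_position == trial["correct_position"]

trial = run_trial()
print(trial, score(trial, trial["correct_position"]))   # a correct, rewarded response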
Fuster found that in each task certain cells in the prefrontal cortex fire
throughout the delay. Animals that have not learned the task show no
such cell activity. Curiously, if a trained animal makes an error, its
cellular activity corresponds: the cells stop responding before the error
occurs. They have “forgotten” the cue.
TRACING THE EXPLICIT MEMORY CIRCUIT People who have chronically
abused alcohol can develop an explicit memory disturbance known as
Korsakoff syndrome. In some cases, severe deficits in explicit memory
extend to implicit memory as well. Korsakoff syndrome is caused by a
thiamine (vitamin B1 ) deficiency that kills cells in the medial part of the
diencephalon—the between brain at the top of the brainstem—including
the medial thalamus and mammillary bodies in the hypothalamus. In 80
percent of Korsakoff patients, the frontal lobes show atrophy (loss of
cells). The memory disturbance is probably so severe because the
damage includes not only forebrain but also brainstem structures (see
Clinical Focus 14-4 , Korsakoff Syndrome).
Korsakoff syndrome Permanent loss of the ability to learn new
information (anterograde amnesia) and to retrieve old information
(retrograde amnesia) caused by diencephalic damage resulting from
chronic alcoholism or malnutrition that produces a vitamin B1
deficiency.
Mortimer Mishkin and his colleagues (Mishkin, 1982; Murray, 2000)
proposed a neural circuit for explicit memory that incorporates the
evidence from both humans and laboratory animals with injuries to the
temporal and frontal lobes. Figure 14-14 presents a modified version of
the Mishkin model. Anatomically (Figure 14-14A), it includes not only
the frontal and temporal lobes but also the medial thalamus, implicated
in Korsakoff syndrome, and the basal forebrain–activating systems
implicated in Alzheimer disease. Figure 14-14B charts the information
flow:
CLINICAL FOCUS 14-4
Korsakoff Syndrome
Over the long term, alcoholism, especially when accompanied by
malnutrition, obliterates memory. When 62-year-old Joe R. was
hospitalized, his family complained that his memory had become
abysmal. His intelligence was in the average range, and he had no
obvious sensory or motor difficulties. Nevertheless, he could not say
why he was in the hospital and usually stated that he was actually in
a hotel.
When asked what he had done the previous night, Joe R. typically
said that he “went to the Legion for a few beers with the boys.”
Although he had, in fact, been in the hospital, it was a sensible
response because going to the Legion is what he had done on most
nights in the preceding 30 years.
Joe R. was not certain what he had done for a living but believed
he had been a butcher. In fact, he had been a truck driver for a local
delivery firm. His son was a butcher, however, so once again his
story related to something in his life.
Joe’s memory for immediate events was little better. On one
occasion, we asked him to remember having met us; then we left the
room. On our return 2 or 3 minutes later, he had no recollection of
ever having met us or of having taken psychological tests that we
had administered.
Joe R. had Korsakoff syndrome. Sergei Korsakoff was a Russian
physician who in the 1880s first called attention to a syndrome that
accompanies chronic alcoholism. The most obvious symptom is
severe memory loss, including amnesia for both information learned
in the past ( retrograde amnesia ) and information learned since the
onset of the memory disturbance ( anterograde amnesia ).
retrograde amnesia Inability to remember events that took
place before the onset of amnesia.
anterograde amnesia Inability to remember events subsequent
to a disturbance of the brain such as head trauma,
electroconvulsive shock, or neurodegenerative disease.
One unique characteristic of the amnesic syndrome in Korsakoff
patients is that they tend to make up stories about past events rather
than admit that they do not remember. These stories are generally
plausible, like those Joe R. told, because they are based on actual
experiences.
Curiously, Korsakoff patients have little insight into their memory
disturbance and are generally indifferent to suggestions that they
have a memory problem. Such patients are generally apathetic to
what’s going on around them too. Joe R. was often seen watching
television when the set was turned off.
The cause of Korsakoff syndrome is a thiamine (vitamin B1 )
deficiency resulting from poor diet and prolonged intake of large
quantities of alcohol. (In addition to a “few beers with the boys,” Joe
R. had a long history of drinking a 26-ounce bottle of rum every
day.) The thiamine deficiency results in the death of cells in the
midline diencephalon, including especially the medial regions of the
thalamus and the mammillary bodies of the hypothalamus.
Most Korsakoff patients also show cortical atrophy, especially in
the frontal lobe. With the appearance of Korsakoff symptoms, which
can happen suddenly, prognosis is poor. Only about 20 percent of
patients show much recovery after a year on a vitamin B1 –enriched
diet. Joe R. has shown no recovery after several years and will spend
the rest of his life in a hospital setting.
Dr. Peter R. Martin from Alcohol Health & Research World, 9 (Spring
1985), cover
PET scans from a healthy patient (larger image) and a Korsakoff
patient (inset) reveal reduced activity in the frontal lobes of the
diseased brain. (The frontal lobes are at the bottom center of
each scan.) Red and yellow represent areas of high metabolic
activity; activity is lower in the darker areas.
FIGURE 14-15 Unidirectional Neural Circuit Proposed for Implicit Memory
FIGURE 14-16 Neural Circuit Proposed for Emotional Memory
The ANS monitors and controls life support functions (Figure 2-30
); the ENS controls the gut (Figure 2-31 ). Section 11-4 reviews the
PAG’s role in pain perception.
Fear is not the only aspect of emotional memory the amygdala codes,
as a study of severely demented patients by Bob Sainsbury and Marjorie
Coristine (1986) nicely illustrates. The patients were believed to have
severe cortical abnormalities but intact amygdalar functioning. The
researchers first established that the patients’ ability to recognize
photographs of close relatives was severely impaired.
The patients were then shown four photographs, one depicting a
relative (either a sibling or a child) who had visited in the past 2 weeks.
The task was to identify the person whom they liked better than the other
three. Although the subjects were unaware they knew anyone depicted in
the photographs, they consistently preferred pictures of their relatives.
This result suggests that although the explicit, and probably the implicit,
memory of the relative was gone, each patient's emotional memory
guided his or her preference.
We tend to remember emotionally arousing experiences vividly, a fact
confirmed by findings from both animal and human studies. James
McGaugh (2004) concluded that emotionally significant experiences,
pleasant and unpleasant, must activate hormonal and brain systems that
act to stamp in these vivid memories.
McGaugh noted that many neural systems probably take part, but the
basolateral part of the amygdala is critical. The general idea is that
emotionally driven hormonal and neurochemical activating systems
(probably cholinergic and noradrenergic) stimulate the amygdala. The
amygdala in turn modulates the laying down of emotional memory
circuits in the rest of the brain, especially in the medial temporal and
prefrontal regions and in the basal ganglia. We would not expect people
with amygdala damage to have enhanced memory for emotion-laden
events, and they do not (Cahill et al., 1995).
Figure 5-17 traces neural activating system connections. Section 6-5
explains how hormones work.
14-3 REVIEW
Neural Systems Underlying Explicit and Implicit Memories
Before you continue, check your understanding.
1 . The two key structures for explicit memory are ___________ and
___________.
2 . A system consisting of the basal ganglia and neocortex forms the
neural basis of the ___________ memory system.
3 . The ___________ and associated structures form the neural basis for
emotional memory.
4 . The progressive stabilization of memories is known as ___________.
5 . Why do we remember emotionally arousing experiences so vividly?
Answers appear at the back of the book.
Both groups were allowed 12,000 finger flexions.
The small-well task was more difficult and required
the learning of a fine motor skill in order to match
performance of the simpler task.
Results
The motor representation of digit, wrist, and arm
was mapped.
Conclusion: The digit representation in the brain of the animal with the
more difficult task is larger, corresponding to the neuronal changes
necessary for the acquired skill.
Information from R. J. Nudo, E. J. Plautz, & G. W. Milliken (1997).
Adaptive plasticity in primate motor cortex as a consequence of
behavioral experience and neuronal injury. Seminars in Neuroscience, 9,
p. 20.
FIGURE 14-23 Cortical Reorganization When a hand amputee’s face is stroked
lightly with a cotton swab (A), the person experiences the stroke as a light touch on the
missing hand (B) as well as a touch to the face. The deafferented cortex forms a
representation of the amputated hand on the face. As in the normal somatosensory
homunculus, the thumb is disproportionately large. Information from V. S.
Ramachandran (1993). Behavioral and magnetoencephalographic correlates of
plasticity in the adult human brain. Proceedings of the National Academy of Sciences
USA, 90, p. 10418.
That experience can alter cortical maps can be demonstrated with other
kinds of training. For example, if animals are trained to make
certain digit movements over and over again, the cortical representation
of those digits expands at the expense of the remaining motor areas.
Similarly, if animals are trained extensively to discriminate among
different sensory stimuli such as tones, the auditory cortical areas
responding to those stimuli increase in size.
Focus 11-5 recounts Ramachandran’s therapy for minimizing
phantom limb pain.
As described in Research Focus 14-5 , Movement, Learning, and
Neuroplasticity, one effect of musical training is to alter the motor
representations of the digits used to play different instruments. We can
speculate that musical training probably alters the auditory
representations of specific sound frequencies as well. Both changes are
essentially forms of memory, and the underlying synaptic changes likely
take place on the appropriate sensory or motor cortical maps.
Sections 10-4 and 15-4 discuss music’s benefits for the brain.
RESEARCH 14-5
Epigenetics of Memory
An enigma in the search for neural mechanisms underlying memory is the
fact that whereas memories remain stable over time, all cells are constantly
undergoing molecular turnover. The simplest explanation for this is
epigenetic: specific sites in the DNA of neurons involved in specific
memories might exist in either a methylated or a nonmethylated state.
Courtney Miller and colleagues (2010) tested this idea directly by
measuring methylation in the hippocampi of rats that underwent contextual
fear conditioning (see Experiment 14-1 ). They showed that fear
conditioning is associated with rapid methylation, but if they blocked
methylation, there was no memory. The investigators conclude that
epigenetic mechanisms mediate synaptic plasticity broadly, but especially in
learning and memory. One implication of these results is that cognitive
disorders, including memory defects, could result from aberrant epigenetic
modifications (for a review, see Day et al., 2015).
Figure 3-25 illustrates two aspects of methylation: histone and DNA
modification.
Conclusion: Nerve growth factor stimulates dendritic growth and
increased spine density in both healthy and injured brains. These
neuronal changes correlate with improved motor function after stroke.
Information from B. Kolb, S. Cote, A. Ribeiro-da-Silva, & A. C. Cuello
(1997). Nerve growth factor treatment prevents dendritic atrophy and
promotes recovery of function after cortical injury. Neuroscience, 76, p.
1146.
FIGURE 14-26 Stem Cells Do the Trick Left: After cortical stroke—
damage is visible at upper right—infusion of epidermal growth factor into a
rat’s lateral ventricle induced neurogenesis in the subventricular zone.
Right: The stem cells migrated to the site of injury and filled in the
damaged area. But the cytological organization is abnormal.
Although they did not integrate well into the existing brain, the new
cells influenced behavior and led to functional improvement (Kolb et al.,
2007). The mechanism of influence is poorly understood. The new
neurons apparently had some trophic influence on the surrounding
uninjured cortex. Preliminary clinical trials with humans are under way
and so far show no ill effects in volunteers.
14-5 REVIEW
Recovery from Brain Injury
Before you continue, check your understanding.
1 . Three ways to compensate for the loss of neurons are (1)
___________, (2) ___________, and (3) ___________.
2 . Two ways of using electrical stimulation to enhance postinjury
recovery are ___________ and ___________.
3 . Endogenous stem cells can be recruited to enhance functional
improvement by using ___________ factors.
4 . What is the lesson of the three-legged cat?
Answers appear at the back of the book.
15
Split Brain
Epileptic seizures may begin in a restricted region of one brain
hemisphere and spread through the fibers of the corpus callosum to
the corresponding location in the opposite hemisphere. To prevent
the spread of seizures that cannot be controlled through medication,
neurosurgeons sometimes sever the 200 million nerve fibers of the
corpus callosum.
The procedure is medically beneficial for many people with
epilepsy, leaving them virtually seizure free, with only minimal
effects on their everyday behavior. In special circumstances,
however, the aftereffects of a severed corpus callosum become more
readily apparent, as extensive psychological testing by Roger Sperry,
Michael Gazzaniga, and their colleagues (Sperry, 1968; Gazzaniga,
1970) has demonstrated.
On close inspection, such split-brain patients reveal a unique
behavioral syndrome that offers insight into the nature of cerebral
asymmetry. Cortical asymmetry is essential for such integrative tasks
as language and body control.
split brain Surgical disconnection of the hemispheres by
severing the corpus callosum.
One split-brain subject was presented with several blocks. Each
block had two red sides, two white sides, and two half-red and half-
white sides, as illustrated. The task was to arrange the blocks to form
patterns identical with those shown on cards.
When the subject used his right hand to perform the task, he had
great difficulty. His movements were slow and hesitant. In contrast,
when he performed the task with his left hand, his solutions were not
only accurate but also quick and decisive.
Findings from studies of other split-brain patients have shown
that, as tasks of this sort become more difficult, left-hand superiority
increases. Participants whose brain is intact perform equally well
with either hand, indicating the intact connection between the two
hemispheres. But in split-brain subjects, each hemisphere works on
its own.
Apparently, the right hemisphere, which controls the left hand,
has visuospatial capabilities that the left hemisphere does not.
In this experiment, a split-brain patient’s task is to arrange a set
of blocks to match the pattern shown on a card. Information from M. S.
Gazzaniga, R. B. Ivry, and G. R. Mangun (1999). Cognitive Neuroscience: The Biology of the
Mind (p. 323). New York: Norton.
Studies of split-brain patients reveal that the left and right cerebral
hemispheres engage in fundamentally different types of thinking. Yet
typically we are unaware of these brain asymmetries. In this chapter we
examine the neural systems and subsystems that control thinking. In the
mammalian brain these systems are in the cortex.
Our first task is to define the mental processes we wish to study—to
ask, what is the nature of thought? Then we consider the cortical regions
—for vision, audition, movement, and associative function—that play
major roles in thinking. We examine how these cortical connections are
organized into such systems and subsystems as the dorsal and ventral
visual streams and how neuroscientists study them.
Next we explore the brain’s asymmetrical organization and delve
deeper into split-brain phenomena. Another distinguishing feature of
human thought is the different ways that individual people think. We
consider several sources of these differences, including those related to
gender and to what we call intelligence. Finally, we address
consciousness and how it may relate to the neural control of thought.
15-1 The Nature of Thought
Studying abstract mental processes such as thought, language, memory,
emotion, and motivation is tricky. They cannot be seen but can only be
inferred from behavior and are best thought of as psychological
constructs, ideas that result from a set of impressions. The mind
constructs the idea as being real, even though it is not tangible.
psychological construct Idea or set of impressions that some
mental ability exists as an entity; memory, language, and emotion
are examples.
We run into trouble when we try to locate constructs such as thought
or memory in the brain. That we have words for these constructs does
not mean that the brain is organized around them. Indeed, it is not. For
instance, although people talk about memory as a unitary thing, the brain
neither treats memory as unitary nor localizes it in one particular place.
The many forms of memory are each treated differently by widely
distributed brain circuits. The psychological construct of memory that we
think of as being a single thing turns out not to be unitary at all.
Assuming a neurological basis for psychological constructs such as
memory and thought is risky, but we certainly should not give up
searching for where and how the brain produces them. After all, thought,
memory, emotion, motivation, and other such constructs are the most
interesting activities the brain performs.
Psychologists typically use the term cognition (knowing) to describe
thought processes, that is, how we come to know about the world. For
behavioral neuroscientists, cognition usually entails the ability to pay
attention to stimuli, whether external or internal, to identify stimuli, and
to plan meaningful responses to them. External stimuli cue neural
activity in our sensory receptors. Internal stimuli can spring from the
autonomic nervous system (ANS) as well as from neural processes—
from constructs such as memory and motivation.
cognition Act or process of knowing or coming to know; in
psychology, refers to thought processes.
Section 14-1 details types of memory, Section 14-4 how
neuroplasticity contributes to memory processing and storage.
• Perhaps most important, human language has syntax: sets of rules for
putting words together to create meaningful utterances.
syntax Ways in which words are put together; proposed to be
unique to human language.
Linguists argue that although other animals, such as chimpanzees, can
use and recognize vocalizations (about three dozen for chimps), they do
not rearrange these sounds to produce new meanings. This lack of
syntax, linguists maintain, makes chimpanzee language literal and
inflexible. Human language, in contrast, has enormous flexibility that
enables us to talk about virtually any topic, even highly abstract ones like
psychological constructs. In this way, our thinking is carried beyond a
rigid here and now.
Before you accept the linguists’ position, review Focus 1-2,
featuring the chimp Kanzi.
Neurologist Oliver Sacks illustrated the importance of syntax to
human thinking in his description of Joseph, an 11-year-old deaf boy
who was raised without sign language for his first 10 years and so was
never exposed to syntax. According to Sacks:
Joseph saw, distinguished, used; he had no problems with perceptual
categorization or generalization, but he could not, it seemed, go much
beyond this, hold abstract ideas in mind, reflect, play, plan. He seemed
completely literal—unable to juggle images or hypotheses or
possibilities, unable to enter an imaginative or figurative realm…. He
seemed, like an animal, or an infant, to be stuck in the present, to be
confined to literal and immediate perception. (Sacks, 1989, p. 40)
Language, including syntax, develops innately in children because the
human brain is programmed to use words in a form of universal
grammar. But in the absence of words—either spoken or signed—no
grammar can develop. Without the linguistic flexibility that grammar
allows, no “higher-level” thought can emerge. Without syntactical
language, thought is stuck in the world of concrete, here-and-now
perceptions. Syntax, in other words, influences the very nature of our
thinking.
In addition to arranging words in syntactical patterns, the human brain
has a passion for stringing together events, movements, and thoughts.
We combine musical notes into melodies, movements into dance, images
into videos. We design elaborate rules for games and governments. To
conclude that the human brain is organized to chain together events,
movements, and thoughts seems reasonable. Syntax is merely one
example of this innate human way of thinking about the world.
We do not know how this propensity to string things together evolved,
but one possibility is natural selection: stringing movements together
into sequences is highly adaptive. It would allow for building houses or
weaving threads into cloth, for instance.
William Calvin (1996) proposed that the motor sequences most
important to ancient humans were those used in hunting. Throwing a
rock or a spear at a moving target is a complex act that requires much
planning. Sudden ballistic movements, such as throwing, last less than an
eighth of a second and cannot be corrected by feedback. The brain has to
plan all the details in advance and then spit them out as a smooth-flowing sequence.
Figure 11-2 diagrams the frontal lobe hierarchy that initiates motor
sequences.
Today, a football quarterback does just this when he throws a football
to a receiver running a zigzag pattern to elude a defender. A skilled
quarterback can hit the target on virtually every throw, stringing
movements together rapidly in a continuous sequence with no pauses or
gaps. This skill is uniquely human. Chimpanzees can throw, but their
throws are inaccurate. No chimpanzee could learn to throw a ball to hit a
moving target.
The human predisposition to sequence movements may have
encouraged language development. Spoken language, after all, is a
sequence of movements involving the throat, tongue, and mouth
muscles. Viewed in this way, language is the by-product of a brain that
was already predisposed to operate by stringing movements, events, or
even ideas, together.
A critical characteristic of human motor sequencing is our ability to
create novel sequences with ease. We constantly produce new sentences.
Composers and choreographers earn their living making new music and
dance sequences. Novel movement or thought sequences are a product of
the frontal lobe.
People with frontal lobe damage have difficulty generating novel
solutions to problems. They are described as lacking imagination. The
frontal lobes are critical not only for organizing behavior but also for
organizing thinking. One major difference between the human brain and
other primates’ brains is the size of the frontal lobes.
Animal Intelligence
Intelligent animals think. We all know that parrots can talk, but most
of us assume that no real thought lies behind their words. An African
grey parrot named Alex proved otherwise. Irene Pepperberg,
pictured here with Alex, studied his ability to think and use language
for more than three decades (Pepperberg, 1990, 1999, 2006).
A typical session with Alex and Pepperberg proceeded roughly as
follows (Mukerjee, 1996): Pepperberg would show Alex a tray with
four corks. “How many?” she would ask. “Four,” Alex would reply.
She then might show him a metal key and a green plastic one.
“What toy?”
“Key.”
“How many?”
“Two.”
“What’s different?”
“Color.”
Alex did not just have a vocabulary: words had meaning to him.
He correctly applied English labels to numerous colors (red, green,
blue, yellow, gray, purple, orange), shapes (two-, three-, four-, five-,
six- cornered), and materials (cork, wood, rawhide, rock, paper,
chalk, wool). He also labeled various items made of metal (chain,
key, grate, tray, toy truck), wood (clothespin, block), and plastic or
paper (cup, box).
Most surprisingly, Alex used words to identify, request, and
refuse items. He responded to questions about abstract ideas, such as
the color, shape, material, relative size, and quantity of more than
100 objects.
Alex’s thinking was often quite complex. Presented with a tray
that contained seven items—a circular rose-colored piece of rawhide,
a piece of purple wool, a three-cornered purple key, a four-cornered
yellow piece of rawhide, a five-cornered orange piece of rawhide, a
six-cornered purple piece of rawhide, and a purple metal box—and
then asked, “What shape is the purple hide?” Alex would answer
correctly, “Six-corner.”
To come up with this answer, Alex had to comprehend the
question, locate the correct object of the correct color, determine the
answer to the question about the object’s shape, and encode his
answer into an appropriate verbal response. This task was not easy.
After all, four objects were pieces of rawhide and three objects were
purple.
Alex could not respond just to one attribute. Rather, he had to
combine the concepts of rawhide and purple and find the object that
possessed them both. Then he had to figure out the object’s shape.
Clearly, considerable mental processing was required, but Alex
succeeded at such tasks time and again.
Alex also demonstrated that he understood what he said. If he
requested one object and was presented with another, he was likely
to say no and repeat his original request. In fact, when given
incorrect objects on numerous occasions in formal testing, he said no
and repeated his request 72 percent of the time, said no without
repeating his request 18 percent of the time, and made a new request
the other 10 percent of the time.
These responses suggest that Alex’s requests led to an expectation
in his mind. He knew what he was asking for, and he expected to get
it.
Wm. Munoz
The African grey parrot Alex, shown here with Irene Pepperberg
and a sampling of the items he could count, describe, and
answer questions about. Alex died in 2007 at the age of 31.
So how does the activity of any given neuron correlate with the
perceptual threshold for apparent motion? On the one hand, if our
perception of apparent motion results from the summed activity of many
dozens or even thousands of neurons, little correlation would exist
between the activity of any one neuron and the perception. On the other
hand, if individual neurons influence our perception of apparent motion,
then a strong correlation would exist between the activity of a single cell
and the perception.
The results of Experiment 15-1 are unequivocal: the sensitivity of
individual neurons is very similar to the perceptual sensitivity of the
monkeys to apparent motion. As shown in the Results section, if
individual neurons failed to respond to the stimulus, the monkeys
behaved as if they did not perceive any apparent motion.
This finding is curious. Given the large number of V5 neurons, it
seems logical that perceptual decisions are based on the responses of a
large pool of neurons. But Newsome’s results show that the activity of
individual cortical neurons is correlated with perception rather than
perception being the property of a particular brain region.
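To make the comparison concrete, the sketch below simulates the kind of analysis this experiment implies, using invented numbers rather than Newsome's actual data or procedure: a single neuron's spike counts and an observer's detection reports are generated at several stimulus strengths, and the detection rates of the two are compared side by side.

```python
# Toy illustration only (not Newsome's analysis): compare how often a simulated
# neuron "detects" a stimulus with how often a simulated observer reports it,
# across increasing stimulus strengths. All numbers are invented.
import random

random.seed(1)

def neuron_spikes(strength):
    """Spike count on one trial: baseline noise plus a response that grows with strength."""
    return random.gauss(10 + 20 * strength, 4)

def observer_detects(strength):
    """Behavioral report on one trial: detection becomes more likely as strength grows."""
    return random.random() < min(1.0, 0.1 + 0.9 * strength)

threshold = 16     # spike count above which we score the neuron as "detecting"
trials = 200

for strength in [0.0, 0.25, 0.5, 0.75, 1.0]:
    neuron_rate = sum(neuron_spikes(strength) > threshold for _ in range(trials)) / trials
    behavior_rate = sum(observer_detects(strength) for _ in range(trials)) / trials
    print(f"strength {strength:.2f}: neuron {neuron_rate:.2f}, behavior {behavior_rate:.2f}")
```

If the two columns of rates rise together across stimulus strengths, the single cell's sensitivity parallels the perceptual sensitivity, which is the pattern the experiment reports.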
Still, Hebb’s idea of a cell assembly—an ensemble of neurons that
represents a complex concept—suggests some way of converging the
inputs of individual neurons to arrive at a consensus. Here, the neuronal
ensemble represents a sensory event (apparent motion) that the activity
of the ensemble detects. Cell assemblies could be distributed over fairly
large regions of the brain, or they could be confined to smaller areas,
such as cortical columns.
Figures 9-33 to 9-35 diagram functional columns in occipital and
temporal cortices.
Hebb’s Cell Assembly “Cells that fire together wire together.” Image from D.
O. Hebb (1949). The Organization of Behavior (Figure 9, p. 71). New York: McGraw-Hill.
FIGURE 15-1 Cortical Functions Lateral view of the left hemisphere
and medial view of the right hemisphere, showing the primary motor and
sensory areas (key: primary motor, primary sensory, primary visual,
primary auditory, primary olfactory and taste). All remaining cortical
areas are collectively referred to as the association cortex, which
functions in thinking.
Multisensory Integration
Our knowledge about the world comes through multisensory channels.
When we see and hear a barking dog, the visual information and auditory
information fit together seamlessly. How do all our neural systems and
functional levels combine to afford us a unified conscious experience?
Philosophers, impressed with this integrative capacity, identified the
binding problem, which asks how the brain ties its single and varied
sensory and motor events together into a unified perception or behavior.
It is gradually becoming clear how the brain binds up our perceptions
and how this ability is gradually acquired in postnatal life (see the review
by Stein & Rowland, 2011).
binding problem Philosophical question focused on how the brain
ties single and varied sensory and motor events together into a
unified perception or behavior.
One solution to the sensory integration aspect of the binding problem
lies in multimodal regions of the association cortex, that is, regions
populated by neurons that respond to information from more than one
sensory modality, as illustrated in Figure 15-4 . Investigators presume
that multimodal regions combine characteristics of stimuli across
different senses when we encounter them separately or together. For
example, the fact that we can visually identify objects that we have only
touched implies a common perceptual system linking the visual and
somatic circuits. In Section 15-5 , Clinical Focus 15-6 profiles a man in
whom stimulation from one sensory modality (taste) concurrently
induces the experience of a different modality (touch).
The senses of smell and taste combine to produce the experience of
flavor; see Section 12-2 .
Spatial Cognition
The location of objects is just one aspect of what we know about space.
Spatial cognition encompasses a whole range of mental functions that
vary from navigational ability (getting from point A to point B) to
mentally manipulating complex visual arrays like those shown in Figure
15-5 .
Imagine going for a walk in an unfamiliar park. You do not go around
and around in circles. Rather, you proceed in an organized, systematic
way. You also need to find your way back. These abilities require a
representation of the physical environment in your mind’s eye.
At some time during the walk, let’s assume you are uncertain where
you are—a common problem. One solution is to make a mental image of
your route, complete with various landmarks and turns. It is a small step
from mentally manipulating these kinds of navigational landmarks and
movements to manipulating other kinds of images in your mind. Thus
the ability to mentally manipulate visual images seems likely to have
arisen in parallel with the ability to navigate in space.
FIGURE 15-5 Spatial Cognition These two figures are the same but
are oriented differently in space. Researchers test spatial cognition by
giving subjects pairs of stimuli like this and asking if the shapes are the
same or different.
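The same/different judgment in Figure 15-5 can be made explicit as a rule: two figures are the same if one can be rotated, but not mirror-reversed, onto the other. The short sketch below, with invented coordinates, expresses that rule; it is illustrative only, not a model of how the brain performs the task.

```python
# Illustrative sketch of the rule behind a mental rotation judgment: two figures
# count as "the same" if some rotation (but no mirror reversal) maps one onto the other.
import math

def rotate(points, degrees):
    """Rotate a list of (x, y) points about the origin."""
    r = math.radians(degrees)
    return [(round(x * math.cos(r) - y * math.sin(r), 6),
             round(x * math.sin(r) + y * math.cos(r), 6)) for x, y in points]

def same_shape(a, b):
    """True if some whole-degree rotation of figure a matches figure b exactly."""
    return any(set(rotate(a, d)) == set(b) for d in range(360))

L_shape = [(0, 0), (0, 1), (0, 2), (1, 0)]      # an L made of four points
rotated = rotate(L_shape, 90)                   # the same L, turned 90 degrees
mirrored = [(-x, y) for x, y in L_shape]        # its mirror image

print(same_shape(L_shape, rotated))    # True: same figure in a different orientation
print(same_shape(L_shape, mirrored))   # False: no rotation maps an L onto its mirror image
```

Checking every whole-degree rotation is deliberately brute force; the point is only to make the "same under rotation, different under reflection" distinction concrete.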
Attention
Imagine you’re meeting some friends at a football game. You search for
them as you meander through the crowd in the stadium. Suddenly you
hear one friend’s distinctive laugh and turn to scan in that direction. You
see your group and rush to join them.
This everyday experience demonstrates the nature of attention,
selective narrowing or focusing of awareness to part of the sensory
environment or to a class of stimuli. Even as sounds, smells, feelings,
and sights bombard you, you still can detect a familiar laugh or spot a
familiar face: you can direct your attention.
attention Narrowing or focusing awareness to a part of the sensory
environment or to a class of stimuli.
More than 100 years ago, William James (1890) defined attention: “It
is the taking possession by the mind in clear and vivid form of one out of
what seem several simultaneous objects or trains of thought.” James’s
definition goes beyond our example of locating friends in a crowd,
inasmuch as he notes that we can attend selectively to thoughts as well as
to sensory stimuli. Who hasn’t at some time been so preoccupied with a
thought as to exclude all else from mind? So attention can be directed
inward as well as outward.
Selective Attention
As with many other inferred mental processes, studying the neural basis
of attention is challenging. Research with monkeys trained to attend to
particular locations or visual stimuli, however, has identified neurons in
the cortex and midbrain that show enhanced firing rates to particular
locations or visual stimuli. Significantly, the same stimulus can activate a
neuron at one time but not at another, depending on the monkey’s
learned focus of attention.
The answer to the mental manipulation in Figure 15-6 is a.
EXPERIMENT 15-2
Results
During performance of this task, researchers recorded the
firing of neurons in visual area V4, which are sensitive to color
and form. Stimuli were presented in either rewarded or
unrewarded locations.
In an extinction test, the patient is asked to keep his or her eyes fixed
on the examiner’s face and to report objects presented in one or both
sides of the visual field. When presented with a single object (a fork) to
one side or the other, the patient orients himself or herself toward the
appropriate side of the visual field, so we know that he or she cannot be
blind on either side. But now suppose that two forks are presented, one
on the left and one on the right. The patient ignores the fork on the left
and reports the one on the right. When asked about the left side, the
patient is quite certain that nothing appeared there and that only one fork
was presented, on the right.
Perhaps the most curious aspect of neglect is that people who have it
fail to pay attention not only to one side of the physical world around
them but also to one side of the world represented in their mind. We
studied one woman who had complete neglect for everything on her left
side. She complained that she could not use her kitchen because she
could never remember the location of anything on her left.
We asked her to imagine standing at the kitchen door and to describe
what was in the various drawers on her right and left. She could not
recall anything on her left. We then asked her to imagine walking to the
end of the kitchen and turning around. We again asked her what was on
her right, the side of the kitchen that had previously been on her left. She
broke into a big smile and tears ran down her face as she realized that
she now knew what was on that side of the room. All she had to do was
reorient her body in her mind’s eye. She later wrote and thanked us for
changing her life, because she was now able to cook again. Clearly,
neglect can exist in the mind as well as in the physical world.
Although complete contralateral neglect is usually associated with
parietal lobe injury, specific forms of neglect can arise from other
injuries. Ralph Adolphs and his colleagues (2005) describe the case of S.
M., a woman with bilateral amygdala damage who could not recognize
fear in faces. On further study, the reason was discovered: S. M. failed to
look at the eyes when she looked at faces; instead, she looked at other
facial features such as the nose. Because fear is most clearly identified in
the eyes, not the nose, she did not identify the emotion. When she was
specifically instructed to look at the eyes, her recognition of fear became
entirely normal. Thus, the amygdala plays a role in directing attention to
the eyes to identify facial expressions.
Planning
At noon on a Friday, a friend proposes that she and you go to a nearby
city for the weekend to attend a concert. She will pick you up at 6:00 P.M
. and you will drive there together.
Because you are completely unprepared for this invitation and
because you are going to be busy until 4:00, you must rush home and get
organized. En route you stop at a fast food restaurant so that you won’t
be hungry on the 2-hour drive. You also need cash, so you head to the
nearest ATM. When you get home, you grab various pieces of clothing
appropriate for the concert and the trip. You also pack your toiletries.
You somehow manage to get ready by 6:00, when your friend arrives.
Although the task of getting ready in a hurry may make us a bit
harried, most of us can manage it. People with frontal lobe injury cannot.
To learn why, let’s consider what the task requires.
1. To plan your behavior, you must select from many options. What do
you need to take with you? Cash? Then which ATM is closest, and
what is the quickest route to it? Are you hungry? Then what is the
fastest way to get food on a Friday afternoon?
2. In view of your time constraint, you have to ignore irrelevant stimuli.
If you pass a sign advertising a sale in your favorite store, for instance,
you have to ignore it and persist with the task at hand.
3. You have to keep track of what you have done already, a requirement
especially important while you are packing. You do not want to forget
anything or pack duplicates. You do not want to take four pairs of
shoes but no toothbrush.
• Group 1: Single black dot, two red dots, three blue dots, four green
dots.
• Group 2: Single blue square, two green squares, three red squares, four
black squares.
• Group 3: Single green star, two blue stars, three black stars, four red
stars.
• Group 4: Single red diamond, two black diamonds, three green
diamonds, four blue diamonds
FIGURE 15-8 Wisconsin Card Sorting Test The subject receives a deck of
cards containing multiple copies of those represented here and is presented with a row
of four cards selected from among them. The task is to place each card in the pile under
the appropriate card in the row, sorting by one of three possible categories. Subjects are
never explicitly told what the correct sorting category is—color, number, or form; they are
told only whether their responses are correct or incorrect. After subjects have begun
sorting by one category, the tester unexpectedly changes to another category.
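The figure caption describes, in effect, a small protocol: the tester scores each placement against a hidden rule, gives only right/wrong feedback, and switches the rule without warning. The sketch below simulates that logic with invented, simplified details; the key cards, the length of the run that triggers a rule switch, and the stubborn form-sorting subject are all assumptions for illustration, not the test's official parameters.

```python
# Simplified simulation of the Wisconsin Card Sorting Test logic. The deck, key cards,
# run length, and the subject's strategy are illustrative assumptions only.
import random

random.seed(0)

COLORS = ["red", "green", "blue", "black"]
FORMS = ["dot", "square", "star", "diamond"]

# Four key cards laid out in a row; each has a unique color, number, and form.
key_cards = [
    {"color": "black", "number": 1, "form": "dot"},
    {"color": "green", "number": 2, "form": "square"},
    {"color": "blue",  "number": 3, "form": "star"},
    {"color": "red",   "number": 4, "form": "diamond"},
]

def random_card():
    return {"color": random.choice(COLORS),
            "number": random.randint(1, 4),
            "form": random.choice(FORMS)}

def subject_choice(card):
    """A subject who always sorts by form and never notices the rule changes."""
    return next(k for k in key_cards if k["form"] == card["form"])

rules = ["color", "number", "form"]
hidden_rule = "form"
run_of_correct = 0

for trial in range(20):
    card = random_card()
    chosen_key = subject_choice(card)
    correct = chosen_key[hidden_rule] == card[hidden_rule]
    print(f"trial {trial:2d}: rule={hidden_rule:6s} -> {'correct' if correct else 'incorrect'}")
    run_of_correct = run_of_correct + 1 if correct else 0
    if run_of_correct == 6:                                   # after a run of correct sorts,
        hidden_rule = random.choice([r for r in rules if r != hidden_rule])  # switch silently
        run_of_correct = 0
```

A subject who keeps sorting by the old category after the hidden rule changes, as this simulated one does, is showing the kind of inflexibility the test is designed to reveal.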
Cognitive Neuroscience
Sophisticated noninvasive stimulation and recording techniques for
measuring the brain’s electrical activity and noninvasive brain imaging
methods led to a major shift in studying brain and behavior: cognitive
neuroscience, the field that studies the neural bases of cognition.
Cognitive neuroscience focuses on high-tech research methods but
continues to rely on the decidedly low-tech tools of neuropsychological
assessment—behavioral tests that compare the effects of brain injuries
on performing particular tasks. Clinical Focus 15-4 , Neuropsychological
Assessment, illustrates its benefits.
cognitive neuroscience Study of the neural bases of cognition.
CLINICAL FOCUS 15-4
Neuropsychological Assessment
In this high-tech age of PET, fMRI, and ERP, low-tech behavioral
assessment endures as among the best, simplest, most economical
ways to measure cognitive function.
To illustrate the nature and power of neuropsychological
assessment, we compare three patients’ test performance on an array of
tests selected from among those used in a complete
neuropsychological assessment. The five tests presented here measure
verbal and visual memory, verbal fluency, abstract reasoning, and
reading. Performance was compared with that of a healthy control
participant.
In the delayed memory tests—one verbal, the other visual—the
patients and the control were read a list of words and two short stories.
They were also shown a series of simple drawings. Their task: repeat
the words and stories immediately after hearing them and draw the
simple figures.
Half an hour later, without warning, they were asked to perform the
tasks again. Their performance on these tests yielded the delayed
verbal and visual memory scores listed in the table.
In the verbal fluency test, patients and the control had 5
minutes to write down as many words as they could think of that start
with the letter s, excluding numbers and people’s names. Then came
the Wisconsin Card Sorting Test, which assesses abstract reasoning
(see Figure 15-8 ). Finally, all were given a reading test.
Subjects’ Scores
Test        Control    J. N.    E. B.    J. W.
Reading     15         21       22       17
* Atypically poor score.
The first patient, J. N., was a 28-year-old man who had developed a
tumor in the anterior and medial left temporal lobe. His preoperative
psychological tests showed superior intelligence scores. His only
significant deficits appeared on tests of verbal memory.
When we saw J. N. a year after surgery that successfully removed
the tumor, he had returned to his job as a personnel manager. His
intelligence was still superior, but as the score summary shows, he was
impaired on the delayed verbal memory test, recalling only about 50
percent as much as the control and other subjects.
The second patient, E. B., was a college senior majoring in
psychology. An aneurysm (a bulge in an artery) in her right temporal lobe
had burst, and the anterior part of that lobe had been removed. E.
B. was of above-average intelligence and completed her undergraduate
degree with good grades. Her score on the delayed visual memory test,
just over half the scores of the other test-takers, clearly showed her
residual deficit.
The third patient, also of above-average intelligence, was J. W., a
42-year-old police detective who had a college degree. A benign tumor
had been removed from his left frontal lobe.
We saw J. W. 10 years after his surgery. He was still on the police
force but working a desk job. His verbal fluency was markedly
reduced, as was his ability to solve the card-sorting task. His reading
skill, however, was unimpaired. This was also true of the other
patients.
Two principles emerge from the results of these three
neuropsychological assessments:
1. Brain functions are localized. Damage to different brain regions
produces different symptoms.
2. Brain organization is asymmetrical. Left-hemisphere damage
preferentially affects verbal functions; right-hemisphere damage
preferentially affects nonverbal functions.
Social Neuroscience
By combining cognitive neuroscience tools, especially functional
neuroimaging, with abstract constructs from social psychology, social
neuroscience seeks to understand how the brain mediates social
interactions. Matthew Lieberman (2007) identified broad themes that
attempt to encompass all cognitive processes involved in understanding
and interacting with others and in understanding ourselves.
social neuroscience Interdisciplinary field that seeks to understand
how the brain mediates social interactions.
Understanding Others
Animals’ minds and experiences are not open to direct inspection. We
infer animals’ minds in part by observing their behavior and people’s
minds by listening to their words. In doing so, we may develop a theory of mind,
the attribution of mental states to others. Theory of mind includes an
understanding that others may have feelings and beliefs different from our
own. This broader understanding has led some investigators to conclude
that theory of mind may be uniquely human. But many researchers who
study apes strongly believe that other apes, too, possess a theory of mind.
theory of mind Ability to attribute mental states to others.
Many fMRI studies over the past decade suggest that the brain region
believed most closely associated with theory of mind is the dorsolateral
prefrontal cortex (see Figure 15-2 ).
Human prefrontal regions are disproportionately large when corrected for
brain size, but other apes also have large prefrontal regions. The anatomy
supports the likelihood that they also possess a theory of mind.
The capacity to understand others can also be inferred from the
presence of empathy. For example, when participants watch videos of
others smelling disgusting odors, they report a feeling of disgust.
Lieberman and his colleagues (Rameson et al., 2012) used fMRI to assess
the neural correlates of empathy by asking participants to empathize with
sad images. Empathy correlated with increased activity in the medial
prefrontal region, suggesting that the area is critical for empathic
experience.
The Povinelli Group LLC
Looking at its reflection and pointing to a dot placed on its forehead,
this chimpanzee displays self-recognition, a cognitive ability
possessed by higher primates.
Understanding Oneself
Not only are we humans aware of others’ intentions, we also have a sense
of self. Humans and apes have the ability to recognize themselves in a
mirror, an ability that human infants demonstrate by about 21 months of
age. Studies using fMRI show that when we recognize our own face versus
the face of familiar others, brain activity increases in the right lateral
prefrontal cortex and in the lateral parietal cortex. The parietal cortex
activation is thought to reflect recognition of what one’s own body feels
like.
But self-recognition is only the beginning of understanding oneself.
People also have a self-concept that includes beliefs about their own
personal traits (e.g., kind, intelligent). When participants are asked to
determine whether trait words or sentences are self-descriptive, brain
activity in medial prefrontal regions increases.
Self-Regulation
Self-regulation is the ability to control emotions and impulses as a means
of achieving long-term goals. We may wish to yell at the professor because
an exam was unfair, but most of us recognize that this course will not be
productive. Dynamic imaging studies again reveal that prefrontal regions
are critical in social cognition, in this case in self-regulation.
Children are often poor self-regulators, which probably reflects slow
development of the prefrontal regions responsible for impulse control. A
uniquely human way to self-regulate is to put feelings into words, a
strategy that allows us to control emotional outbursts. Curiously, such
verbal labeling is associated with increased activity in the right lateral
prefrontal regions but not in the left.
Section 8-2 describes unique aspects of frontal lobe development that
extend beyond childhood—up to age 30.
Not only can humans control their emotions; they also have
expectations about how a stimulus might feel (e.g., an injection by
syringe). Our expectations can alter the actual feeling of an event. It is
common for people to say ouch when they do something like stub a toe,
even if they actually feel no pain. Nobukatsu Sawamoto and colleagues
(2000) found that when participants expect pain, activity increases in the
anterior cingulate cortex (see Figure 15-2 B), a region associated with pain
perception, even if the stimulus turns out not to be painful.
Feeling and treating physical pain is a topic in Section 11-4 . Focus
12-1 reports on emotional pain.
Living in a Social World
We spend much of our waking time interacting with others socially. In a
sense, our understanding of our self and our social interactions link
together into a single mental action. One important aspect of this behavior
includes forming attitudes and beliefs about ourselves and about others.
When we express attitudes (including prejudices) toward ideas or human
groups, brain imaging shows activation in prefrontal, anterior cingulate,
and lateral parietal regions.
Samuel McClure and colleagues (2004) took advantage of the fact that
many people have strong attitudes toward cola-flavored sodas. Coca-Cola
and Pepsi Cola are nearly identical in chemical composition, yet
subjectively, people routinely prefer one to the other. The researchers ran
blind taste tests among people who stated a cola brand preference.
As a group, participants failed to discriminate the drinks accurately
when they were presented in a blind taste test. Brain activity was also
equivalent for each cola in the blind condition. However, when participants
believed that they were drinking Coca-Cola, significant changes in brain
activity were recorded in many regions, including the hippocampus and
dorsolateral prefrontal cortex. The investigators concluded that cultural
information biases brain systems, which in turn biases attitudes.
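A simple way to ask whether a group can really tell two drinks apart in a blind test is to compare the number of correct identifications against what guessing would produce. The sketch below does this with a binomial tail probability; the counts are invented, and this is not the analysis McClure and colleagues reported.

```python
# Sketch: did tasters identify their preferred cola more often than chance in a blind test?
# The counts are invented for illustration; this is not McClure and colleagues' analysis.
from math import comb

def binomial_p_value(correct, trials, chance=0.5):
    """One-sided probability of at least `correct` hits if every response were a guess."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

trials = 40      # blind trials across the group (hypothetical)
correct = 23     # trials on which the preferred brand was named correctly (hypothetical)

p = binomial_p_value(correct, trials)
print(f"{correct}/{trials} correct, p = {p:.2f}")   # about 0.21: consistent with guessing
```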
Social Cognition and Brain Activity
Social cognitions running the gamut from understanding ourselves to
understanding others clearly are associated with activation of specific brain
regions, especially prefrontal regions. The obvious conclusion is that
prefrontal activity produces our social cognitions, just as activity in visual
regions produces our visual perceptions. But this conclusion has proved
controversial. Ed Vul and colleagues (2009, 2012) go so far as to suggest
that “correlations in social neuroscience are voodoo.” Their assertion has
led to strong disputations (e.g., Lieberman et al., 2012). The arguments are
complex, focus on the nature of the analysis of fMRI data, and will
certainly continue. In our view, the debate does not impugn the general
conclusion that brain states produce behavioral states.
Neuroeconomics
Historically, economics was a discipline based on the “rational actor,” the
belief that people make rational decisions. In the real world, people often
make decisions based on assumption or intuition, as is common in
gambling. Why don’t people always make rational decisions?
Leonard Mlodinow’s wonderful 2009 book, The Drunkard’s Walk:
How Randomness Rules Our Lives, offers many everyday examples.
The cerebral processes underlying human decision making are not
easily inferred from behavioral studies. But investigators in the field of
neuroeconomics, which combines ideas from economics, psychology, and
neuroscience, attempt to explain those processes by studying patterns of
brain activity as people make decisions in real time. The general
assumption among neuroeconomists is that two neural decision pathways
influence our choices. One is deliberate, slow, rule-driven, and emotionally
neutral; it acts as a reflective system. The other pathway—fast, automatic,
emotionally biased—forms a reflexive system.
neuroeconomics Interdisciplinary field that seeks to understand how
the brain makes decisions.
If people must make quick decisions they believe will provide
immediate gain, widespread activity appears in the dopaminergic reward
system. This includes the ventromedial prefrontal cortex and ventral
striatum (nucleus accumbens): the reflexive pathway. If slower,
deliberative decisions are possible, activity is greater in the lateral
prefrontal, medial temporal, and posterior parietal cortex, the areas that
form the reflective pathway.
Figure 6-17 maps the dopaminergic pathways associated with reward.
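The reflexive/reflective distinction can be sketched as two competing valuation routines, one fast and emotionally weighted, one slow and rule-driven, with time pressure deciding which routine settles the choice. The sketch below is purely conceptual, with invented numbers; it is not a model proposed by neuroeconomists.

```python
# Conceptual sketch only: a fast reflexive valuation (immediate reward plus emotional pull)
# versus a slow reflective valuation (probability-weighted long-run payoff). Time pressure
# determines which system's choice is used. All values are invented.
def reflexive_value(option):
    return option["immediate_reward"] + option["emotional_pull"]

def reflective_value(option):
    return option["probability"] * option["delayed_reward"]

def choose(options, time_pressure):
    value = reflexive_value if time_pressure else reflective_value
    return max(options, key=value)

options = [
    {"name": "gamble", "immediate_reward": 10, "emotional_pull": 5,
     "probability": 0.1, "delayed_reward": 50},
    {"name": "save", "immediate_reward": 1, "emotional_pull": 0,
     "probability": 1.0, "delayed_reward": 20},
]

print(choose(options, time_pressure=True)["name"])    # "gamble": the reflexive system wins
print(choose(options, time_pressure=False)["name"])   # "save": the reflective system wins
```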
Neuroeconomists are looking to identify patterns of neural activity in
everyday decision making, patterns that may help account for how people
make decisions about their finances, social relations, and other personal
choices. Although most neuroeconomic studies to date have used fMRI, in
principle these studies could also use other noninvasive imaging
technologies. Epigenetic factors probably contribute to developing the
balance between the reflective and reflexive systems in individuals.
Epigenetic studies therefore may help explain why many people make
decisions that are not in their long-term best interest.
Because all animals make decisions and all mammals are predicted to have
similar reflective and reflexive decision systems, the neural bases of
decision processes in nonhumans undoubtedly will receive more study in
the future.
15-3 REVIEW
Expanding Frontiers of Cognitive Neuroscience
Before you continue, check your understanding.
1 . Noninvasive imaging techniques enable cognitive psychologists to
investigate the neural bases of thought in the “normal” brain, leading
to the field called ___________.
2 . Imaging methods such as DTI and fcMRI are allowing researchers to
develop a ___________, a map of the complete structural and
functional fiber pathway connections in the living human brain.
3 . The role of the ___________ in cognition, formerly unappreciated, is
now attracting researchers’ attention.
4 . Social neuroscience is an interdisciplinary field that seeks to
understand how the brain mediates ___________.
5 . Our attribution of mental states to others is known as ___________.
6 . Neuroeconomics seeks to understand the neural bases of
___________.
7 . List four general themes of social neuroscience research.
Answers appear at the back of the book.
Anatomical Asymmetry
Building on Broca’s findings, investigators have learned how the
language- and music-related areas of the left and right temporal lobes
differ anatomically. In particular, the primary auditory area is larger on
the right, whereas the secondary auditory areas are larger on the left in
most people. Other brain regions also are asymmetrical.
Figure 15-11 shows that the lateral fissure, which partly separates the
temporal and parietal lobes, has a sharper upward course in the right
hemisphere relative to the left. As a result, the posterior right temporal
lobe is larger than the same region on the left, as is the left parietal lobe
relative to the right.
Among the anatomical asymmetries in the frontal lobes, the region of
sensorimotor cortex representing the face is larger in the left hemisphere
than in the right, a difference that presumably corresponds to the left
hemisphere’s special role in talking. Broca’s area is organized differently
on the left and the right. The area visible on the brain’s surface is about
one-third larger on the right than on the left, whereas the cortical area
buried in the sulci of Broca’s area is greater on the left than on the right.
Not only do these gross anatomical differences exist but so too do
hemispheric differences in the details of their cellular and neurochemical
structures. For example, neurons in Broca’s area on the left have larger
dendritic fields than do corresponding neurons on the right. The discovery
of structural asymmetries told us little about the reasons for such
differences, but ongoing research is revealing that they result from
underlying differences in cognitive processing by the brain’s two sides.
Although many anatomical asymmetries in the human brain are related
to language, brain asymmetries are not unique to humans. Most if not all
mammals have asymmetries, as do many bird species. The functions of
cerebral asymmetry therefore cannot be limited to language processing.
Rather, human language likely evolved after the brain became
asymmetrical. Language simply took advantage of processes, including
the development of mirror neurons, that had already been lateralized by
natural selection in earlier members of the human lineage.
FIGURE 15-13 Two Arm Movement Series (Series 1 and Series 2) Subjects
observe the tester perform each sequence, then copy it as accurately as
they can. People with left-hemisphere injury, especially in the posterior
parietal region, are impaired at copying such movements.
Results
The participant chooses the spoon with his left hand because
the right hemisphere sees the spoon and controls the left hand.
If the right hand is forced to choose, it will do so by chance
because no stimulus is shown to the left hemisphere.
(B) Question: What happens if both hemispheres are asked
to respond to competing information?
Procedure
Results
In this case, the right and left hands do not agree. They may
each pick up a different object, or the right hand may prevent
the left hand from performing the task.
Conclusion: Each hemisphere is capable of responding independently.
The left hemisphere may dominate in a competition, even if the
response is not verbal.
Females are generally faster at this type of test than males are.
FIGURE 15-19 Evidence for Sex Differences in Cortical Organization
Lateral view of the left hemisphere (key: apraxia, aphasia; females,
males). Apraxia and aphasia are associated with frontal damage to the
left hemisphere in women and with posterior damage in men. Information
from D. Kimura (1999). Sex and Cognition, Cambridge, MA: MIT Press.
Because the left hemisphere controls the right hand, the general
assumption is that right-handedness is somehow related to the presence of
speech in the left hemisphere. If this were so, language would be located in
the right hemisphere of left-handed people. This hypothesis is easily tested,
and it turns out to be false.
In the course of preparing patients with epilepsy for surgery to remove
the abnormal tissue causing their seizures, Ted Rasmussen and Brenda
Milner (1977) injected the left or right hemisphere with sodium
amobarbital. This drug produces a short-acting anesthesia of the entire
hemisphere, making it possible to determine where speech originates. As
described in Clinical Focus 15-5 , Sodium Amobarbital Test, if a person
becomes aphasic when the drug is injected into the left hemisphere but not
when the drug is injected into the right, then speech must reside in that
person’s left hemisphere.
Rasmussen and Milner found that in virtually all right-handed people,
speech was localized in the left hemisphere, but the reverse was not true for
left-handed people. About 70 percent of left-handers also had speech in the
left hemisphere. Of the remaining 30 percent, about half had speech in the
right hemisphere and half had speech in both hemispheres. Findings from
neuroanatomical studies have subsequently shown that left-handers with
speech in the left hemisphere have asymmetries similar to those of right-
handers. By contrast, in left-handers with speech originating in the right
hemisphere or in both hemispheres—known as anomalous speech
representation —the anatomical symmetry is reversed or absent.
anomalous speech representation Condition in which a person’s
speech zones are located in the right hemisphere or in both
hemispheres.
Sandra Witelson and Charlie Goldsmith (1991) asked whether any other
gross differences in the brain structure of right- and left-handers might
exist. One possibility is that the connectivity of the cerebral hemispheres
may differ. To test this idea, the investigators studied the hand preference of
terminally ill subjects on a variety of one-handed tasks. They later
performed postmortem studies of these patients’ brains, paying particular
attention to the size of the corpus callosum. They found that the callosal
cross-sectional area was 11 percent greater in left-handed and ambidextrous
(little or no hand preference) people than in right-handed people.
Whether this enlarged callosum is due to a greater number of fibers, to
thicker fibers, or to more myelin remains to be seen. If the larger corpus
callosum is due to more fibers, the difference would be on the order of 25
million more fibers. Presumably, such a difference would have major
implications for the organization of cognitive processing in left- and right-
handers.
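The 25 million figure is easy to check as back-of-envelope arithmetic if one assumes a total of roughly 200 million callosal axons, a commonly cited estimate that is not given in this text:

```python
# Back-of-envelope check, assuming roughly 200 million axons in the human corpus
# callosum (a commonly cited estimate, not a figure stated in this text).
total_fibers = 200_000_000
extra_fibers = 0.11 * total_fibers
print(f"{extra_fibers:,.0f}")   # 22,000,000, i.e., on the order of 25 million more fibers
```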
Synesthesia
Some variations in brain organization are idiosyncratic rather than
systematic. Synesthesia is an individual’s capacity to join sensory
experiences across sensory modalities, as discussed in Clinical Focus 15-6 ,
A Case of Synesthesia. Examples include the ability to hear colors or taste
shapes. Edward Hubbard (2007) estimated the incidence of synesthesia at
about 1 in 23 people, although for most it likely is limited in scope.
synesthesia Ability to perceive a stimulus of one sense as the sensation
of a different sense, as when sound produces a sensation of color;
literally, feeling together.
Synesthesia runs in families—the family of Russian novelist Vladimir
Nabokov, for example. As a toddler, Nabokov complained to his mother
that the letter colors on his wooden alphabet blocks were all wrong. His
mother understood what he meant, because she too perceived letters and
words in particular colors. Nabokov’s son is synesthetic in the same way.
Musician–composer Stevie Wonder is a synesthete, as were music
legends Duke Ellington and Franz Liszt and Nobel Prize–winning
physicist Richard Feynman.
CLINICAL FOCUS 15-6
A Case of Synesthesia
Michael Watson tastes shapes. His sensory joining first came to the
attention of neurologist Richard Cytowic over dinner. After tasting a
sauce he was making for roast chicken, Watson blurted out, “There
aren’t enough points on the chicken.”
When Cytowic quizzed him about this strange remark, Watson said
that all flavors had shape for him. “I wanted the taste of this chicken to
be a pointed shape, but it came out all round. Well, I mean it’s nearly
spherical. I can’t serve this if it doesn’t have points” (Cytowic, 1998, p.
4).
Watson has synesthesia, which literally means feeling together. All his
life Watson has experienced the feeling of shape when he tastes or smells
food. When he tastes intense flavors, he reports an experience of shape
that sweeps down his arms to his fingertips. He experiences the feeling
of weight, texture, warmth or cold, and shape, just as though he were
grasping something.
The feelings are not confined to his hands, however. Watson
experiences some taste shapes, such as points, over his whole body. He
experiences others only on the face, back, or shoulders. These
impressions are not metaphors, as other people might use when they say
that a cheese is sharp or a wine is textured. Such descriptions make no
sense to Watson. He actually feels the shapes.
Cytowic systematically studied Watson to determine whether his
feelings of shape were always associated with particular flavors and
found that they were. Cytowic devised the set of geometric figures
shown here to allow Watson to communicate which shapes he associated
with various flavors.
If you’ve shivered on hearing a particular piece of music or the noise
of fingernails scratching across a chalkboard, you have felt sound. Even
so, other sensory blendings may be difficult to imagine. How can sounds
or letters possibly produce colors? Studies of synesthetes show that the
same stimuli always elicit the same experiences for them.
The most common form of synesthesia is colored hearing. For many
synesthetes, this means hearing both speech and music in color—
perceiving a visual mélange of colored shapes, movement, and
scintillation. The fact that colored hearing is more common than other
types of synesthesia is curious.
The five primary senses (vision, hearing, touch, taste, and smell) all
generate synesthetic pairings. Most, however, are in one direction. For
instance, whereas synesthetes may see colors when they hear, they do not
hear sounds in colors. Furthermore, some sensory combinations occur
rarely, if at all. In particular, taste or smell rarely triggers a synesthetic
response like Michael Watson’s.
Because each case is idiosyncratic, synesthesia’s neurological basis is
difficult to investigate. Few studies have related it directly to brain
function or brain organization, and different people may experience it for
different reasons. Various hypotheses have been advanced to account for
synesthesia:
• Extraordinary neural connections link the sensory regions that are
related in a particular synesthete.
• Activity is increased in the frontal lobe multimodal cortex, which
receives inputs from more than one sensory area.
• Particular sensory inputs elicit unusual patterns of cerebral activation.
Whatever the explanation, when it comes to certain sensory inputs,
the brain of a synesthete certainly works differently from other people’s
brains.
15-5 REVIEW
Multiple Intelligences
Many other hypotheses on intelligence have been set forth since
Spearman’s, but few have considered the brain directly. One exception,
proposed by Howard Gardner (1983), a neuropsychologist at Harvard,
considers the effects of neurological injury on people’s behavior.
Gardner concludes that seven distinct forms of intelligence exist and that
brain injury can selectively damage any form. The idea of multiple
human intelligences should not be surprising given the varied cognitive
operations the human brain can perform.
Gardner’s seven categories of intelligence are linguistic, musical,
logical-mathematical, spatial, bodily-kinesthetic, intrapersonal, and
interpersonal. Linguistic and musical intelligence are straightforward
concepts, as is logical-mathematical intelligence. Spatial intelligence
refers to abilities discussed in this chapter, especially navigating in
space, and to the ability to draw and paint. Bodily-kinesthetic
intelligence refers to superior motor abilities, such as those exemplified
by skilled athletes and dancers.
The two types of “personal” intelligence are less obvious. They refer
to the frontal and temporal lobe operations required for success in a
highly social environment. The intrapersonal aspect is awareness of
one’s own feelings, whereas the interpersonal aspect entails recognizing
others’ feelings and responding appropriately. Gardner’s definition of
intelligence has the advantage not only of being inclusive but also of
acknowledging forms of intelligence not typically recognized by
standard intelligence tests, abilities such as theory of mind, described in
Section 15-3 .
One prediction stemming from Gardner’s analysis of intelligence is
that brains ought to differ in some way when people have more of one
form of intelligence and less of another. Logically, we could imagine that
if a person were higher in musical intelligence and lower in interpersonal
intelligence, then the brain regions for music (especially the temporal
lobe) would differ in some fundamental way from the “less efficient”
regions for interpersonal intelligence. One way to examine such
differences is to use fcMRI or DTI to identify differences in pathways, as
in the example of absolute pitch (see Figure 15-10 ).
Intelligence
Before you continue, check your understanding.
1 . Different concepts of intelligence include Spearman’s ___________,
Gardner’s ___________, Guilford’s concepts of ___________ and
___________ thinking, and Hebb’s ___________ and ___________.
2 . Each form of intelligence that humans possess is probably related to
the brain’s ___________ organization as well as to its ___________
efficiency.
3 . No two brains are alike. They differ, for example, in ___________,
___________, and ___________.
4 . Evidence that Hebb’s intelligence A and intelligence B can be altered
by experience is evidence of ___________ influences on brain
organization.
5 . How might intelligence be related to brain activity?
Answers appear at the back of the book.
Conclusion: It is possible to dissociate behavior and conscious
awareness.
Research from C. Frith, R. Perry, & E. Lumer (1999). The neural correlates of conscious experience. Trends
in Cognitive Sciences, 3, pp. 105–114.
Consciousness
Before you continue, check your understanding.
1 . Over the course of human evolution, one characteristic of sensory
processing is that it has become more ___________.
2 . ___________ is the mind’s level of responsiveness to impressions
made by the senses.
3 . As relative human brain size and complexity have increased, so too
has our degree of ___________.
4 . Not all behavior is under conscious control. What types of behaviors
are not conscious?
Answers appear at the back of the book.
16
Behavioral Disorders
Behavioral disorders afflict millions every year. The National Institute of
Mental Health estimates that in a given year about one in four people
in the United States has a diagnosable behavioral disorder, and nearly half
of the population does over their lifetime. Only a minority receive
treatment of any kind, and even fewer receive treatment from a mental
health specialist. Large-scale surveys of neurological disorders show a
similar pattern of prevalence. Together, behavioral, psychiatric, and
neurological disorders are the leading cause of disability after age 15.
Behavioral disorders, traditionally classified as social, psychological,
psychiatric, and neurological, reflect the assessment and treatment roles
different professional groups play. As understanding of brain function
increases, the lines between behavioral disorders are blurring.
Figure 8-31 pegs the peak age of onset for mental disorders at 14
years.
Bipolar and related disorders: Disorders placed between schizophrenia and
depressive disorders, bridging two diagnostic classes, and characterized
by periods of extreme elation and/or significant depressive symptoms.
Anxiety disorders: All feature excessive fear and anxiety but differ in
the objects or situations that induce fear, anxiety, or avoidance and in
the associated cognitive ideation. Includes generalized anxiety disorder
(GAD), specific phobia, agoraphobia, panic disorder, and separation
anxiety disorder.
Feeding and eating disorders: Abnormal patterns of eating that
significantly impair physical health and/or psychosocial functioning,
including pica, anorexia nervosa, bulimia nervosa, and binge-eating
disorder.
Source: Information from Diagnostic and Statistical Manual of Mental Disorders (5th
ed.), 2013. Washington, DC: American Psychiatric Association.
The primary clinical use of TMS, which the U.S. Food and Drug
Administration formally approved in 2008, is for depression.
Numerous studies report positive effects using TMS, but the required
duration of treatment and the duration of beneficial effects remain
under investigation.
The effects of brief pulses of TMS do not outlive the stimulation.
Repetitive TMS (rTMS), however, which involves continuous
stimulation for up to several minutes, produces longer-lasting effects.
What is needed to fully evaluate TMS effects in alleviating
depression is a double-blind study, in which both therapists and
patients are unaware of whether real or sham stimulation is
administered (Serafini et al., 2015).
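In practice, a double-blind design hinges on concealed random allocation: participants receive neutral codes, and the key linking codes to real or sham stimulation is held by someone outside the treatment room. The sketch below shows one way such an allocation might be generated; it is illustrative only and not drawn from any particular trial protocol.

```python
# Illustrative sketch of concealed random allocation for a double-blind TMS trial.
# Therapists and patients see only neutral codes; the code-to-condition key stays
# with a third party until the analysis stage.
import random

def allocate(n_participants, seed=42):
    conditions = ["active"] * (n_participants // 2) + \
                 ["sham"] * (n_participants - n_participants // 2)
    random.Random(seed).shuffle(conditions)
    return {f"P{i:03d}": c for i, c in enumerate(conditions, start=1)}

unblinding_key = allocate(20)        # kept sealed; not visible in the clinic
print(sorted(unblinding_key)[:3])    # therapists work only with codes: ['P001', 'P002', 'P003']
```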
In addition to treating depression, small but promising studies have
extended the possible benefits of TMS to schizophrenic auditory
hallucinations, anxiety disorders, neurodegenerative diseases,
hemiparesis, and pain syndrome (Wassermann & Zimmerman, 2012).
Among the problems in all studies of TMS are questions related to
the duration and intensity of stimulation and also to the area
stimulated. Each person’s brain is slightly different, so to ensure that
appropriate structures are stimulated, MRI must be performed on each
subject.
Does TMS stimulation make the brain more plastic? If so, can
learning be enhanced? The idea is that a train of TMS pulses produces a
change in cortical excitability, and this change in turn
facilitates learning. Indeed, combined TMS and training can improve
the therapeutic effects of motor or cognitive training given alone
(Nevler & Ash, 2015).
Pharmacological Treatments
Several accidental discoveries, beginning in the 1950s, led to a
pharmacological revolution in the treatment of behavioral disorders:
1. The development of phenothiazines (neuroleptics) to treat
schizophrenia stemmed from a drug used to premedicate surgical
patients. In the following decades, neuroleptic drugs became
increasingly selective, and they remain effective.
2. A new class of antianxiety drugs was invented: the anxiolytics.
Medications such as Valium quickly became—and remain—the most
widely prescribed drugs in the United States.
3. L-Dopa provided the first drug treatment for serious motor dysfunction
in Parkinson disease. Once ingested, L-dopa is converted into dopamine,
replacing the dopamine lost to the disease.
The power of psychoactive drugs to change disordered behavior
revolutionized the pharmaceutical industry. The central goal is developing
drugs that can act as magic bullets to correct the chemical imbalances
found in various disorders. Research is directed toward making drugs
more selective in targeting specific disorders while producing fewer side
effects. Both goals have proved difficult to achieve.
Pharmacological treatments have significant downsides. Acute and
chronic side effects top the list, and long-term use may cause new
problems. Consider a person who receives antidepressant medication.
The drug may ease the depression but it may also produce unwanted side
effects, including decreased sexual desire, fatigue, and sleep disturbance.
The last two effects may also interfere with cognitive functioning.
Thus, although a medication may be useful for getting a person out of
a depressed state, it may produce other symptoms that are themselves
disturbing and may complicate recovery. Furthermore, in depression
related to a person’s life events, a drug does not provide the behavioral
tools needed to cope with an adverse situation. Some psychologists say,
“A pill is not a skill.”
Section 6-2 classifies psychoactive drugs and their therapeutic
effects.
Negative side effects of drug treatments are evident in many people
whose schizophrenia is being treated with neuroleptics. Antipsychotic
drugs act on the mesolimbic dopamine system, which affects motivation,
among other functions. The side effect emerges because the drugs also act
on the nigrostriatal dopaminergic system, which controls movement.
Patients who take neuroleptics over long periods may also develop motor
disturbances. Tardive dyskinesia, an inability to stop the tongue, hands,
or other body parts from moving, is a motor symptom of long-term
neuroleptic administration. Side effects of movement disorders can persist
after the psychoactive medication has been stopped. Taking drugs for
behavioral disorders, then, does carry risk. Rather than magic bullets,
these medications often act like shotguns.
tardive dyskinesia Inability to stop the tongue or other body parts
from moving; motor side effect of neuroleptic drugs.
Despite their drawbacks, drugs do prove beneficial for many people.
Improved drug chemistry will reduce side effects, as will improved
delivery modes that bring a drug to a target system with minimal effects
on other systems. One improved delivery system uses nanoparticles called
liposomes, biosynthetic molecules 1 to 100 nm (nanometers, or billionths
of a meter) in size. One natural biological nanoparticle, with a radius of
about 40 nm, is the synaptic vesicle that houses a neurotransmitter for
delivery into the extracellular space at the synapse. Liposomes consisting of a
synthetic vesicle with a homing peptide on the surface can, in principle,
be constructed to carry a drug across the blood–brain barrier and deliver it
to specified types of neuron or glial cells within the nervous system.
Behavioral Treatments
Treatments for behavioral disorders need not be direct biological or
medical interventions. Just as the brain can alter behavior, behavior can
alter the brain. Behavioral treatments focus on key environmental factors
that influence how a person acts. As behavior changes in response to
treatment, the brain is affected as well.
An example is treatment for generalized anxiety disorders attributed to
chronic stress. People who endure a persistently high anxiety level often
engage in maladaptive behaviors to reduce it. While they may require
immediate treatment with antianxiety medication, long-term treatment
entails changing their behavior. Generalized anxiety disorder is not
simply a problem of abnormal brain activity but also of experiential and
social factors that fundamentally alter the person’s perception of the
world.
Focus 12-3 recounts a case of generalized anxiety disorder.
Perhaps you are thinking that behavioral treatments may help
somewhat in treating brain dysfunction, but the real solution must lie in
altering brain activity. Since every aspect of behavior is the product of
brain activity, behavioral treatments do act by changing brain function. If
people can change how they think and feel about themselves or some
aspect of their lives, this change has taken place because talking about
their problems or resolving a problem alters how their brain functions. In
a sense then, a behavioral treatment is a biological intervention.
Behavioral treatments may sometimes be helped along by drug treatments
that make the brain more receptive to change through behavioral therapy.
In this way, drug treatments and behavioral treatments have synergistic
effects, each helping the other to be more effective.
Your behavior is a product of all your learning and social experiences.
An obvious approach to developing a treatment is to re-create a learning
environment that replaces a maladaptive behavior with an adaptive
behavior. Thus, the various approaches to behavioral treatment use
principles derived from experiment-based learning theory. Following is a
sampling of these approaches.
Lea Paterson/Science Source
Systematic desensitization for a phobia, the most common among anxiety disorders, as Focus 12-3
reports.
Neurological Disorders
In everyone’s lifetime, at least one close friend or relative will develop a
neurological disorder, even if we ourselves escape them. Disorder causes
are understood in a general sense, and for most, rehabilitative treatment
is emerging. In this section we review some common neurological
disorders: traumatic brain injury, stroke, epilepsy, multiple sclerosis, and
neurodegenerative disorders.
Concussion
Early in 2011, 50-year-old former Chicago Bears defensive back
Dave Duerson shot himself in the chest and died. He left a note
asking that his brain be studied. Duerson had played 11 years in the
National Football League, won two Super Bowls, and received
numerous awards.
As a pro player he endured at least 10 concussions, but they did
not seem serious enough to cause him to leave the game. After
retiring from football, he went to Harvard and obtained a business
degree. He pursued a successful business career until he began to
have trouble making decisions and controlling his temper.
Eventually, Duerson’s business and marriage failed. After his
suicide, the Center for the Study of Traumatic Encephalopathy in
Boston did study his brain. The Center is conducting postmortem
anatomical analyses of the brains of former athletes as part of a long-
term longitudinal study.
FIGURE 16-6 Mechanics of TBI Pink and blue shading mark brain
regions most frequently damaged in closed-head injury. A blow can
produce a contusion both at the site of impact and on the opposite side of
the brain, owing to rebound compression.
More generalized impairment results from minute lesions and
lacerations scattered throughout the brain. Movement of the hemispheres
in relation to one another causes tearing; the resulting damage is marked by a loss of
complex cognitive functions, including mental speed, concentration, and
overall cognitive efficiency.
TBI patients generally complain of poor concentration or lack of
ability. They fail to do things as well as they could before the injury, even
though their intelligence is unimpaired. In fact, in our experience, people
with high skill levels seem to be the most affected by TBI, in large part
because they are acutely aware of loss of a skill that prevents them from
returning to their former competence level.
Traumatic brain injury that damages the frontal and temporal lobes
also tends to significantly affect personality and social behavior. Few
victims of traffic accidents who have sustained severe head injuries ever
resume their studies or return to gainful employment. If they do reenter
the work force, they do so at a lower level than before their accident.
One frustrating problem with traumatic brain injury is misdiagnosis:
chronic effects of injuries often are unaccompanied by any obvious
neurological signs or abnormalities in CT or MRI scans. Patients may
therefore be referred for psychiatric or neuropsychological evaluation.
MRI-based imaging techniques such as magnetic resonance
spectroscopy (MRS), however, are useful for accurate TBI diagnosis
(Reis et al., 2015).
magnetic resonance spectroscopy (MRS) Modification of MRI to
identify changes in specific markers of neuronal function; promising
for accurate diagnosis of traumatic brain injuries.
MRS, a modification of MRI, can identify changes in specific markers
of neuronal function. One such marker is N -acetylaspartate (NAA), the
second most abundant amino acid in the human brain. Assessing the level
of NAA expression provides a measure of neuronal integrity, and
deviations from normal levels (up or down) can be taken as a marker of
abnormal brain function. People with traumatic brain injury show a
chronic decrease in NAA that correlates with the severity of the injury.
Although not yet in wide clinical use, MRS is a promising tool, not only
for identifying brain abnormalities but also for monitoring cellular
response to therapeutic interventions.
Section 7-3 introduces the MRS technique.
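For readers who find a computational analogy helpful, the short Python sketch below captures the logic of using NAA as a marker: a measured level is expressed as a deviation from a normative range, and a deviation in either direction beyond a cutoff is flagged as abnormal. The normative mean, standard deviation, and cutoff used here are hypothetical placeholders for illustration, not clinical values.

    # Illustrative sketch of an NAA-based marker of neuronal integrity.
    # The normative values and the cutoff below are hypothetical, not clinical.
    NORMAL_MEAN_NAA = 10.0   # hypothetical normative NAA level (arbitrary units)
    NORMAL_SD_NAA = 1.0      # hypothetical normative standard deviation

    def naa_z_score(measured_naa: float) -> float:
        """Express a measured NAA level as standard deviations from the normative mean."""
        return (measured_naa - NORMAL_MEAN_NAA) / NORMAL_SD_NAA

    def flag_abnormal(measured_naa: float, cutoff: float = 2.0) -> bool:
        """A deviation in either direction (up or down) counts as abnormal."""
        return abs(naa_z_score(measured_naa)) >= cutoff

    print(flag_abnormal(7.5))   # True: a chronic decrease, as reported after TBI
    print(flag_abnormal(10.2))  # False: within the normative range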
Recovery from Traumatic Brain Injury
Recovery from head trauma may continue for 2 to 3 years and longer, but
most cognitive recovery occurs in the first 6 to 9 months. Recovery of
memory functions appears to be slower than recovery of general
intelligence, and the final level of memory performance is lower than for
other cognitive functions. People with brainstem damage, as inferred
from oculomotor disturbance, have a poorer cognitive outcome, and a
poorer outcome is probably true of people with initial dysphasias or
hemiparesis as well.
Although the prognosis for significant recovery of cognitive functions
is good, optimism about the recovery of social skills or personality traits,
areas that often show significant change, is less rosy. Findings from
numerous studies support the conclusions that quality of life—in social
interactions, perceived stress levels, and enjoyment of leisure—is
significantly reduced after TBI and that this reduction is chronic.
Attempts to develop tools to measure changes in psychosocial adjustment
in brain-injured people are few, so we must rely largely on subjective
descriptions and self-reports. Neither provides much information about
the specific causes of these problems (Block et al., 2015).
Stroke
Diagnosticians may be able to point to a specific immediate cause of
stroke, an interruption of blood flow from either blockage of a vessel or
bleeding from a vessel. This initial event, however, merely sets off a
sequence of damage that progresses, even if the blood flow is restored.
Stroke results in a lack of blood, called ischemia, followed by a cascade
of cellular events that wreak the real damage. Changes at the cellular
level can seriously compromise not only the injured part of the brain but
other brain regions as well.
ischemia Lack of blood to the brain, usually as a result of a stroke.
Focus 2-3 describes the symptoms and aftereffects of stroke.
Effects of Stroke
Consider what happens after a stroke interrupts the blood supply to a
cerebral artery. In the first seconds to minutes after ischemia, as
illustrated in Figure 16-7 , changes begin in the affected regions’ ionic
balance, including changes in pH and in the properties of the cell
membrane. These ionic changes result in several pathological events.
1. Release of massive amounts of glutamate results in prolonged opening
of calcium channels in cell membranes.
2. Open calcium channels in turn allow toxic levels of calcium to enter
the cell, not only producing direct toxic effects but also instigating
various second-messenger pathways that can harm neurons. In the
ensuing minutes to hours, mRNA is stimulated, altering protein
production in the neurons and possibly proving toxic to the cells.
Figure 5-4 shows how calcium affects neurotransmitter release;
Figure 5-15 , how metabotropic receptors can activate second
messengers.
3. Brain tissues become inflamed and swollen, threatening the integrity of
cells that may be far removed from the stroke site. As in TBI, an energy
crisis ensues as mitochondria reduce their production of ATP, resulting
in less cerebral energy.
4. A form of neural shock occurs. During this diaschisis, areas distant
from the damage are functionally depressed. Thus, not only are local
neural tissue and its function lost but areas related to the damaged
region also undergo a sudden withdrawal of excitation or inhibition.
diaschisis Neural shock that follows brain damage in which areas
connected to the site of damage show a temporary arrest of function.
5. Stroke may also be followed by changes in the injured hemisphere’s
metabolism, its glucose utilization, or both. These changes may persist
for days. As with diaschisis, the metabolic changes can severely affect
the functioning of otherwise healthy tissue. For example, after a
cortical stroke, metabolic rate has been shown to decrease about 25
percent throughout the hemisphere.
Treatments for Stroke
The ideal treatment is to restore blood flow in blocked vessels before the
cascade of nasty events begins. One clot-busting drug is tissue
plasminogen activator (t-PA), but t-PA must be administered within 3 to 5
hours to be effective. Currently, only a small percentage of stroke patients
arrive at the hospital soon enough, in large part because stroke is not
quickly identified, transportation is slow, or the stroke is not considered
an emergency.
FIGURE 16-7 Results of Ischemia A cascade of events takes place after blood
flow is blocked as a result of stroke. Within seconds, ionic changes at the
cellular level spur changes in second-messenger molecules and RNA
production. Changes in protein production and inflammation follow and
resolve slowly, in hours to days. Recovery begins within hours to days and
continues for weeks to months or years.
The FAST signs of stroke:
Face Check whether one side of the face droops by asking the person to smile.
Arms Check if one arm is weak by asking the person to raise both arms.
Speech Check for slurred or strange speech by asking the person to repeat a simple sentence.
Time If you see any symptom, call 911 or the local emergency services number right away.
When the course of the stroke leads to dead brain tissue, the only
treatments that can be beneficial are those that facilitate plastic changes in
the remaining brain. Examples are speech therapy and physical therapy.
Revolutionary approaches to stroke rehabilitation use virtual reality,
computer games, and robotic machines (Laver et al., 2015).
Still, some simple treatments are surprisingly effective. One is
constraint-induced therapy, pioneered by Edward Taub in the 1990s
(Kawakkel et al., 2015). Its logic confronts a problem in poststroke
recovery related to learned nonuse. Stroke patients with motor deficits in
a limb often compensate by overusing the intact limb, which in turn leads
to increased loss of use in the impaired limb.
Experiment 11-3 describes research with monkeys that contributed to
developing constraint-induced therapy for people.
In constraint-induced therapy, the intact limb is held in a sling for
several hours per day, forcing the patient to use the impaired limb.
Nothing about the procedure is magical: virtually any treatment that
forces patients to practice behaviors extensively is successful. An
important component of these treatments, however, is a posttreatment
contract in which the patients continue to practice after the formal therapy
is over. If they fail to do so, the chances for learned nonuse and a return
of symptoms are high.
Another common effect of stroke is loss of speech. Specific speech
therapy programs can aid in the recovery of speech. Music and singing,
mediated in part by the right hemisphere, can augment speech therapy
after left hemisphere stroke.
Therapies using pharmacological interventions (e.g., noradrenergic,
dopaminergic, cholinergic agonists) combined with behavioral therapies
provide equivocal gains in stroke patients. The bulk of evidence suggests
that patients with small gray matter strokes are most likely to show
benefits from these treatments, whereas those with large strokes that
include white matter show little benefit.
Finally, there have been many attempts to use either direct cortical
stimulation or TMS in combination with behavioral therapy as a stroke
treatment. The idea is to induce plasticity in regions adjacent to the dead
tissue with the goal of enhancing the efficiency of the residual parts of the
neuronal networks. These treatments have proved beneficial in patients
with good residual motor control, but again, those with larger injuries
show much less benefit, presumably because the residual neuronal
network is insufficient.
Epilepsy
Epilepsy is characterized by recurrent seizures, which register on an
electroencephalogram (EEG) as highly synchronized neuronal firing
indicated by a variety of abnormal waves. About 1 person in 20 has at
least one seizure in his or her lifetime, usually associated with infection, fever, or hyperventilation during childhood (6 months
to 5 years of age). Most children who experience a seizure do not develop
epilepsy, which affects between 0.2 percent and 4.1 percent of the
population. Developed nations record a lower prevalence and incidence
of epilepsy compared with developing nations.
Focus 4-1 describes a diagnosis of epilepsy and shows an EEG being
recorded.
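As a rough computational illustration of what “highly synchronized neuronal firing” looks like in an EEG record, the Python sketch below compares simulated desynchronized activity with simulated seizure-like activity in which all channels share one large rhythm. The simulated signals and the simple correlation measure are illustrative only, not a clinical detection method.

    # Toy illustration of why a seizure "registers" on the EEG: when neurons fire
    # in synchrony, the signals recorded at different electrodes become highly
    # correlated and large in amplitude. The signals are simulated, not real EEG.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 2, 500)                      # 2 seconds of simulated recording

    # Desynchronized activity: each of 4 channels is mostly independent noise.
    normal = rng.normal(0, 1, size=(4, t.size))

    # Seizure-like activity: every channel shares one large 3-Hz rhythm.
    shared_rhythm = 5 * np.sin(2 * np.pi * 3 * t)
    seizure = shared_rhythm + rng.normal(0, 1, size=(4, t.size))

    def mean_pairwise_correlation(channels):
        """Average correlation between every pair of channels."""
        c = np.corrcoef(channels)
        return float(c[np.triu_indices_from(c, k=1)].mean())

    print(mean_pairwise_correlation(normal))    # near 0: desynchronized
    print(mean_pairwise_correlation(seizure))   # near 1: hypersynchronized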
Classifying Seizures
Causes of epileptic seizures are categorized as genetic,
structural/metabolic, or unknown. Genetic epilepsy results directly from a
known genetic defect. Causes of structural/metabolic epilepsy include
brain malformations and tumors, acquired disorders such as stroke and
trauma, and infections. The unknown category encompasses causes yet to
be identified. Table 16-4 summarizes the great variety of circumstances
that appear to precipitate a seizure. The range of circumstances is striking,
but seizures do have a consistent feature: they are most likely to occur
when a person is sleeping.
Simon Fraser/Royal Victoria Infirmary/Newcastle Upon Tyne/Science Photo
Library/Science Source
Structural or metabolic epilepsy may result from brain malformations such
as angioma, or AV malformation, shown on this dorsal view MRI. Abnormal
cerebral blood vessels (in white) form a balloonlike structure (blue area at
lower right) that caused the death of brain tissue around it in the right
occipital cortex.
TABLE 16-4 Circumstances That May Precipitate a Seizure
Drugs: alcohol, analeptics, excessive anticonvulsants, phenothiazines, tricyclic antidepressants
Emotional stress
Fever
Hormonal changes: adrenal steroids, menses, puberty
Hyperventilation
Sensory stimuli: flashing lights, laughing
Sleep
Sleep deprivation
Trauma
Treating Epilepsy
The first-line treatment for epilepsy is drugs to inhibit seizure
development and propagation. Among the diverse range of mechanisms
drugs employ to raise seizure thresholds are enhancing the action of the
inhibitory neurotransmitter GABA and stabilizing the inactive state of
sodium channels. Most people with epilepsy work with their physician to
find a drug and dosage that cause few side effects. In 30 percent to 40
percent of people with epilepsy, however, antiseizure drugs fail to
completely control the condition—intractable epilepsy. The most
common treatment for intractable epilepsy in adults is surgical resection
of epileptogenic tissue, which has a success rate of about 70 percent.
Deep-brain stimulation may prove useful for intractable epilepsy:
bilateral stimulation of the anterior thalamus has been successful in
reducing seizure frequency. While DBS shows promise, more research is
needed (Ostergard & Miller, 2014).
FIGURE 16-8 Generalized Seizure Patterns Examples of EEG
patterns recorded during a generalized seizure. Dots on the hemispheres
below indicate the approximate recording sites. Column numbers mark the
seizure’s stages: (1) normal record before the attack; (2) onset of seizure
and tonic phase; (3) clonic phase; and (4) a period of depressed EEG
activity after the seizure ends. Abbreviations: LT and RT, left and right
temporal; LF and RF, left and right frontal; LO and RO, left and right
occipital.
Multiple Sclerosis
In multiple sclerosis (MS), the myelin that encases axons is damaged and
neuronal functions are disrupted. MS is characterized by myelin loss in
both motor and sensory tracts and nerves. The oligodendroglia that form
the myelin sheath, and in some cases the axons, are destroyed. Brain
imaging with MRI, as shown in Figure 16-9 , identifies areas of sclerosis
in the brain as well as in the spinal cord.
Remission followed by relapse is a striking feature of MS: in many
cases, early symptoms initially are followed by improvement. The course
varies, running from a few years to as long as 50 years. Paraplegia, the
classic feature of MS, may eventually confine the affected person to bed.
Worldwide, about 1 million people have MS; women outnumber men
about two to one. Multiple sclerosis is most prevalent in northern Europe
and northern North America, rare in Japan and in more southerly or
tropical countries. Depending on region, incidence of MS ranges from 2
to 150 per 100,000 people, making it one of the most common structural
nervous system diseases.
The cause or causes of MS are unknown. Proposed causes include
bacterial infection, a virus, environmental factors including pesticides, an
immune response of the central nervous system, misfolded proteins, and
lack of vitamin D. Often, multiple cases occur in a single family. Many
genes have been associated with MS, but no clear evidence indicates as
yet that MS is inherited or transmitted from one person to another.
Neurodegenerative Disorders
Human societies have never before undergone the age-related
demographic shifts now developing in North America and Europe. Since
1900, the percentage of older people has increased steadily. In 1900,
about 4 percent of the population had attained 65 years of age. By 2030,
about 20 percent will be older than 65—about 50 million people in the
United States alone.
Dementias affect 1 percent to 6 percent of the population older than
age 65 and 10 percent to 20 percent of those older than age 80. For every
person diagnosed with dementia, it is estimated that several others endure
undiagnosed cognitive impairments that affect their quality of life.
Currently, more than 6 million people in the United States have a
dementia diagnosis, a number projected to rise to about 15 million by
2050. By then, 1 million new U.S. cases per year will be emerging.
Extending these projections across the rest of the developed world
portends staggering social and economic costs (Khachaturian &
Khachaturian, 2015). The World Health Organization estimates that by
2050, the number of people living with dementia will balloon to 135.5 million worldwide.
Types of Dementia
Dementia is an acquired and persistent syndrome of intellectual
impairment. Its two essential features are (1) loss of memory and other
cognitive deficits and (2) impairment in social and occupational
functioning. Dementia is not a singular disorder, but there is no clear
agreement on how to split up subtypes. Daniel Kaufer and Steven
DeKosky (1999) divide dementias into the broad categories of
degenerative and nondegenerative ( Table 16-5 ).
dementia Acquired and persistent syndrome of intellectual
impairment characterized by memory and other cognitive deficits
and impairment in social and occupational functioning.
Nondegenerative dementias, a heterogeneous group of disorders with
diverse causes, including diseases of the vascular or endocrine systems,
inflammation, nutritional deficiency, and toxins, are summarized on the
right in Table 16-5 . The most prevalent cause is vascular. The most
significant risk factors for nondegenerative dementias are chronic
hypertension, obesity, sedentary lifestyle, smoking, and diabetes. All are also risk factors for cardiovascular disease. Degenerative dementias,
listed on the left in the table, presumably have a degree of genetic
transmission.
We now review two degenerative dementias, Parkinson and Alzheimer
diseases. Both pathological processes are primarily intrinsic to the
nervous system, and both tend to affect certain neural systems selectively.
TABLE 16-5 Degenerative and Nondegenerative Dementias (degenerative dementias listed at left, nondegenerative dementias at right)
Parkinson Disease
Parkinson disease is common. Estimates of its incidence vary up to 1.0
percent of the population, rise sharply in old age, and are certain to grow
in coming decades. The disease seems related to degeneration of the
substantia nigra and attendant loss of the neurotransmitter dopamine
produced there and released in the striatum. The disease therefore offers
insight into the roles played by the substantia nigra and dopamine in
movement control.
Figure 7-5 diagrams degeneration in the substantia nigra associated
with Parkinson symptoms.
That Parkinson symptoms vary enormously illustrates the complexity
inherent in understanding a neurological disorder. A well-defined set of
cells degenerates, yet the symptoms are not the same in every patient.
Many symptoms strikingly resemble changes in motor activity that occur
as a consequence of aging. Thus Parkinson disease offers indirect insight
into more general problems of neural changes in aging.
Symptoms begin insidiously, often with a tremor in one hand and
slight stiffness in distal parts of the limbs. Movements may become
slower, the face becoming masklike with loss of eye blinking and poverty
of emotional expression. Thereafter the body may stoop and the gait
become a shuffle, with arms hanging motionless at the sides. Speech may
slow and become monotonous, and difficulty swallowing may cause
drooling.
Although the disease is progressive, the rate at which symptoms
worsen varies; only rarely is progression so rapid that a person becomes
disabled within 5 years. Usually 10 to 20 years elapse before symptoms
cause incapacity. A distinctive aspect of Parkinson disease is its on-
again–off-again quality: symptoms may appear suddenly and just as
suddenly disappear.
Partial remission may also occur in response to interesting or
stimulating situations. Neurologist Oliver Sacks (1998) recounted an
incident in which a stationary Parkinson patient leaped from his
wheelchair at the seaside and rushed into the breakers to save a drowning
man, only to fall back into his chair immediately afterward and become
inactive again. Remission of some symptoms in activating situations is
common but usually not as dramatic as this case. Simply listening to
familiar music can help an otherwise inactive patient get up and dance,
for example. Or a patient who has difficulty walking may ride a bicycle
or skate effortlessly. Such activities can be used as physical therapy, and
physical therapy is important because it may slow disease progression.
Sacks, whose writings enriched the neurological literature beyond
measure, died in 2015.
The four major symptoms of Parkinson disease are tremor, rigidity,
loss of spontaneous movement (hypokinesia ), and postural disturbances.
Each symptom may manifest in different body parts in different
combinations. Because some symptoms entail the appearance of
abnormal behaviors (positive symptoms) and others the loss of normal
behaviors (negative symptoms), we consider both major categories.
Positive symptoms are behaviors not typically seen in people.
Negative symptoms are the absence of typical behaviors or inability
to engage in an activity.
POSITIVE SYMPTOMS Because positive symptoms are common in
Parkinson disease, they are thought to be inhibited, or held in check, in
unaffected people but released from inhibition in the process of the
disease. Following are the three most common:
1. Tremor at rest. Alternating movements of the limbs occur when they
are at rest and stop during voluntary movements or sleep. Hand tremors
often have a pill-rolling quality, as if a pill were being rolled between
the thumb and forefinger.
2. Muscular rigidity. Increased muscle tone simultaneously in both
extensor and flexor muscles is particularly evident when the limbs are
moved passively at a joint. Movement is resisted, but with sufficient
force the muscles yield for a short distance then resist movement again.
Thus, complete passive flexion or extension of a joint occurs in a series
of steps, giving rise to the term cogwheel rigidity. Rigidity may be
severe enough to make all movements difficult—like moving in slow
motion and being unable to speed up the process.
3. Involuntary movements. Small movements or changes in posture,
sometimes referred to as akathesia, or cruel restlessness, may accompany general inactivity. These movements sometimes serve to relieve tremor or stiffness but often occur for no apparent reason. Other
involuntary movements are distortions of posture, such as occur during
oculogyric crisis (involuntary turns of the head and eyes to one side),
which last minutes to hours.
akathesia Small, involuntary movements or changes in posture;
motor restlessness.
NEGATIVE SYMPTOMS After detailed analysis of negative
symptoms, Purdon Martin (1967) divided patients severely affected
with Parkinson disease into five groups:
1. Disorders of posture. These include disorders of fixation and of equilibrium. A disorder of fixation presents as an inability or difficulty in maintaining a part of the body in its normal position in relation to other parts. A person’s head may droop forward, or a standing person may gradually bend forward, ending up on the knees. Disorders of equilibrium cause difficulty in standing or even sitting unsupported. In less severe cases, people may have difficulty standing on one leg, or if pushed lightly on the shoulders, they may fall passively without taking corrective steps or attempting to catch themselves.
2. Disorders of righting. A person in a supine position has difficulty
standing. Many advanced patients have difficulty in even rolling over.
3. Disorders of locomotion. Normal locomotion requires support of the
body against gravity, stepping, balancing while the weight of the body
is transferred from one leg to the other, and pushing forward. Parkinson
patients have difficulty initiating stepping. When they do walk, they
shuffle with short footsteps on a fairly wide base of support because
they have trouble maintaining equilibrium when shifting weight from
one leg to the other. On beginning to walk, Parkinson patients often
demonstrate festination: they take faster and faster steps and end up
running forward.
festination Tendency to engage in a behavior, such as walking, faster
and faster.
4. Speech disturbances. One symptom most noticeable to relatives is the
almost complete absence of prosody (rhythm and pitch) in the
speaker’s voice.
5. Hypokinesia. Poverty or slowness of movement may also manifest
itself in a blankness of facial expression, a lack of blinking or of
swinging the arms when walking, a lack of spontaneous speech, or an
absence of normal fidgeting. Akinesia also manifests in difficulty
making repetitive movements, such as tapping, even in the absence of
rigidity. People who sit motionless for hours show hypokinesia in its
most striking manifestation.
COGNITIVE SYMPTOMS Although Parkinson disease is usually
viewed as a motor disorder, changes in cognition occur as well.
Psychological symptoms in Parkinson patients are as variable as the
motor symptoms. Nonetheless, a significant percentage of patients show
cognitive symptoms that mirror their motor symptoms.
Oliver Sacks (1998) reported impoverishment of feeling, libido,
motive, and attention: people may sit for hours, apparently lacking the
will to begin or continue any activity. Thinking seems generally to be
slowed and is easily confused with dementia because patients do not
appear to be processing the content of conversations. In fact, they may be
simply processing very slowly.
Cognitive slowing in Parkinson patients has some parallels to
Alzheimer disease.
CAUSES OF PARKINSONISM The ultimate cause of Parkinson
disease—loss of cells in the substantia nigra—may result from disease,
such as encephalitis or syphilis, from drugs such as MPTP, or from
unknown causes. Idiopathic causes—those related to the individual—may
include environmental pollutants, insecticides, and herbicides.
Demographic studies of patient admissions in the cities of Vancouver,
Canada, and Helsinki, Finland, show an increased incidence of patients
contracting the disease at ages younger than 40. This finding has
prompted the suggestion that water and air might contain environmental
toxins that work in a fashion similar to MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine), a contaminant that has been found in synthetic heroin
and causes Parkinson disease.
Actor Michael J. Fox, a native of Canada pictured in Focus 5-2, was
diagnosed with young-onset Parkinson disease at age 30.
TREATING PARKINSON DISEASE The cure for Parkinson disease would be either to stop degeneration in the substantia nigra or to replace the lost dopamine neurons. Neither
goal is achievable at present. Thus, current treatment is pharmacological
and directed toward support and comfort.
Psychological factors influence Parkinsonism’s major symptoms:
outcome is affected by how well a person copes. Patients should seek
behaviorally oriented treatment early—counseling on the meaning of
symptoms, the nature of the disease, and the potential for most patients to
lead long, productive lives. Physical therapy consists of simple measures,
such as heat and massage, to alleviate painful muscle cramps as well as
training and exercise to cope with debilitating movement changes. Used
therapeutically, music and exercise can improve other aspects of
behavior, including balance and walking, and may actually slow the
course of the disease.
The prime objective of pharmacological treatment is increasing the
activity in whatever dopamine synapses remain. L -Dopa, a precursor of
dopamine, is converted into dopamine in the brain and enhances effective
dopamine transmission, as do drugs such as amantadine, amphetamine,
monoamine oxidase inhibitors, and tricyclic antidepressants.
Anticholinergic drugs, such as atropine, scopolamine, benztropine
(Cogentin), and trihexyphenidyl (Artane), block the brain cholinergic
systems that seem to show heightened activity in the absence of adequate
dopamine activity. As the disease progresses, drug therapies become less
effective, and the incidence of side effects increases. Some drug
treatments that directly stimulate dopamine receptors have been reported
to result in increased sexuality and an increased incidence of compulsive
gambling.
Two surgical treatments described in Section 16-2 are based on the
idea that increased activity of globus pallidus neurons inhibits motor
function. A lesion of the internal part of the globus pallidus (GPi) can
reduce rigidity and tremor. Hyperactivity of GPi neurons can also be
reduced neurosurgically by electrically stimulating the neurons via deep
brain stimulation (see Figure 16-4 ). A stimulating electrode is
permanently implanted in the GPi or an adjacent area, the subthalamic
nucleus. Patients carry a small electric stimulator that they can turn on to
induce DBS and so reduce rigidity and tremor. These two treatments may
be used sequentially: when DBS becomes less effective as the disease
progresses, a GPi lesion may be induced.
A promising prospective Parkinson treatment involves increasing the
population of dopamine-producing cells. The simplest way is to
transplant embryonic dopamine cells into the basal ganglia. In the 1980s
and 1990s, trials of this treatment reported varying degrees of success but were marred by poorly conducted studies with inadequate preassessment and
postassessment procedures. A newer treatment course proposes either
transplanting stem cells that could then be induced to take a dopaminergic
phenotype or stimulating endogenous stem cells to migrate to the basal
ganglia. The advantage is that these stem cells need not be derived from
embryonic tissue but can come from a variety of sources, including the
person’s own body.
Figure 11-13 charts how the GPi , a structure in the basal ganglia,
regulates movement force.
All these treatments are experimental (Politis & Lindvall, 2012).
Before cell replacement will become a useful therapy, many questions
must be resolved, including which cell source is best, where in the brain
to put grafts, and how new cells can be integrated into existing brain
circuits. Stem cells are not a quick fix for Parkinson disease, but the
pioneering work on this disease will be instrumental in applying such
technology to other diseases.
Anatomical Correlates of Alzheimer Disease
Given the increasing population of elderly people and thus of Alzheimer
disease, which accounts for about 65 percent of all dementias, research is
directed toward potential causes. Personal lifestyle, environmental toxins,
high levels of trace elements such as aluminum in the blood, an
autoimmune response, a slow-acting virus, reduced blood flow to the
cerebral hemispheres, and genetic predisposition are targets of ongoing
research.
Incidence of Alzheimer disease is high in some families, making
genetic causes pertinent to understanding disease progression. Risk
factors include the presence of the Apoe4 gene, below-average IQ score,
poor education, and TBI. Presumably, better educated and/or more
intelligent people and those who carry the Apoe2 gene are better able to
compensate for cell death in degenerative dementia.
A decade ago, the only way to identify and study Alzheimer disease
was postmortem pathology examination. This approach was less than
ideal because determining which brain changes came early in the disease
and which resulted from those early changes was impossible.
Nonetheless, it became clear that widespread changes take place in
neocortex and allocortex and that associated changes take place in many
neurotransmitter systems. Most of the brainstem, cerebellum, and spinal
cord are relatively spared from Alzheimer’s major ravages.
The principal neuroanatomical change in Alzheimer disease is the
emergence of amyloid plaques (clumps of protein from dead neurons and
astrocytes), chiefly in allocortex and neocortex. Increased plaque
concentration in the cortex has been correlated with the magnitude of
cognitive deterioration. Plaques are generally considered nonspecific
phenomena in that they can be found in non-Alzheimer patients and in
dementias caused by other known events.
Focus 14-3, on Alzheimer etiology, includes a micrograph of an
amyloid plaque.
Another anatomical correlate of Alzheimer disease is neurofibrillary
tangles (accumulations of microtubules from dead cells) found in both
neocortex and allocortex, where the posterior half of the hippocampus is
affected more severely than the anterior half. Neurofibrillary tangles have
been described mainly in human tissue and have also been observed in
patients with Down syndrome, Parkinson disease, and other dementias.
Neurofilaments are a type of tubule that reinforces cell structure, aids
its movement, and transports proteins.
Finally, neocortical changes that correlate with Alzheimer disease are
not uniform. As Figure 16-10 shows plainly, the cortex atrophies and can
lose as much as one-third of its volume as the disease progresses. But
cellular analyses at the microscopic level reveal that some areas,
including the primary sensory and motor cortices, especially the visual
and sensorimotor cortex, are relatively spared. The frontal lobes are less
affected than is the posterior cortex.
The best-studied similarity between Parkinson and Alzheimer diseases is the Lewy body
( Figure 16-12 ), a fibrous ring that forms within neuronal cytoplasm and
is thought to correspond to abnormal neurofilament metabolism. Until
recently, the Lewy body was most often found in the region of the
midbrain substantia nigra and believed to be a hallmark of Parkinson
disease, as amyloid plaques were viewed as a marker of Alzheimer
disease. In fact, Lewy bodies appear in several neurodegenerative
disorders, including Alzheimer disease. There are even reports of people
with Alzheimerlike dementias who have no plaques and tangles but do
have extensive Lewy bodies in the cortex.
Lewy body Circular fibrous structure found in several
neurodegenerative disorders; forms within the cytoplasm of neurons
and is thought to result from abnormal neurofilament metabolism.
Alzheimer and Parkinson symptoms may be similar because both
diseases have similar origins. Indeed, the idea that several diseases
marked by brain degeneration—including Huntington disease, MS, and
ALS—may have a similar origin is central to prion theory, advanced in
1982 by Stanley B. Prusiner, who received the Nobel Prize for his work
in 1997. Its name derived from the terms protein and infection, a prion is
an abnormally folded protein that causes progressive neurodegeneration.
prion From protein and infection, an abnormally folded protein that
causes progressive neurodegenerative disorders.
Infection as referred to in the definition is not one caused by casual
contact.
Prions were identified during investigation of various degenerative
brain diseases in humans and other animals. Creutzfeldt-Jakob disease, a
rare human degenerative disease that progresses rapidly, gained public
prominence in the 1990s. People were contracting a similar condition
after eating beef from cattle that had displayed symptoms of bovine
spongiform encephalopathy (BSE), a degenerative brain condition
accompanied by muscle wasting. BSE in turn is similar to a condition in
sheep called scrapie (the animals scrape or scratch themselves) that also
features wasting of brain and body. Chronic wasting disease is a similar
condition found in deer and elk.
The infectious nature of such conditions was observed in the Fore tribe
of Papua New Guinea in the 1950s. A large number of Fore were dying of
a muscle-wasting condition called kuru (meaning to shake in Fore). The
Fore contracted kuru by practicing ritual cannibalism: they ate the brains of dead relatives, a practice they believed preserved the spirits of the dead. In an experiment to investigate the condition, body parts from kuru victims were fed to a chimpanzee, which then contracted the disease.
The infectious agent in these conditions is a prion. Prion proteins are
found in healthy cell membranes and may play a role in attaching one cell
to another. Prion proteins also bind to metallic ions, for example, copper.
The proteins typically fold in a normal configuration but can also misfold
(Figure 16-13 ). The altered configuration causes disease.
A misfolded prion protein will attach to a healthy prion protein and
cause it to misfold. Misfolded prions tend to clump together, forming
protein aggregates that eventually result in cell death. Misfolded prions
can also infect neighboring brain and body cells, resulting in general
brain degeneration and muscle wasting. An infectious prion can pass
from one individual to another, and even from one species to another, but
only if the normal prion proteins in the two individuals are similar.
Investigations show that among several alleles of the gene that produces
normal prion proteins, some are more susceptible to misfolding than are
others. Individuals with alleles that are not susceptible to misfolding, such as those Fore tribespeople who did not contract kuru, are resistant to
prion disease.
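The autocatalytic character of this conversion, in which each misfolded prion can recruit healthy prion proteins, can be illustrated with a toy simulation in Python. The pool size, conversion rate, and number of steps below are arbitrary illustrative numbers, not measured values.

    # Toy model of prion propagation: each misfolded prion can convert healthy
    # prion proteins it contacts, so the misfolded population grows autocatalytically.
    # All parameters are illustrative, not measured values.
    def simulate_prion_spread(healthy: int = 100_000, misfolded: int = 1,
                              conversion_rate: float = 1e-5, steps: int = 20) -> list[int]:
        history = []
        for _ in range(steps):
            # Conversions this step scale with the number of possible
            # healthy-misfolded encounters.
            converted = min(healthy, int(conversion_rate * healthy * misfolded))
            healthy -= converted
            misfolded += converted
            history.append(misfolded)
        return history

    print(simulate_prion_spread())  # slow start, then rapid conversion of the healthy pool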
© Mayo Foundation for Medical Education and Research. All rights reserved.
FIGURE 16-13 Disease Process Left: Prion proteins typically fold into
helixes and pleated sheets. Right: In misfolding, part of the protein helix changes
into a sheet. The result is an infectious disease–causing prion.
Psychiatric Disorders
The DSM-5 summarizes a wide range of mental disorders. We focus on
the three general behavioral categories—psychoses, mood disorders, and
anxiety disorders—that are the best studied and understood. Figure 16-
14 summarizes their prevalence. Personal, family, and social costs are
not reflected in these statistics. Schizophrenia, bipolar disorder, and
major depression affect a smaller number of people, but their costs, in
loss of social relationships, productivity, and medical care, are
disproportionate.
Mood Disorders
The DSM-5 identifies a continuum of affective disorders, separating
bipolar and related disorders from depressive disorders (see Table 16-3 ).
The bipolar category is positioned between schizophrenia and depressive
disorders, forming a bridge between the diagnostic classes that takes into
account symptoms, family history, and genetics. Depression and mania
—our principal interests here—represent the extremes of affect. The
main symptoms of major depression are prolonged feelings of
worthlessness and guilt, disruption of normal eating habits, sleep
disturbances, a general slowing of behavior, and frequent thoughts of
suicide.
Focus 6-3 explains the threat of suicide attendant to untreated major
depression.
Mania, the opposite affective extreme from depression, is
characterized by excessive euphoria, which the subject perceives as
typical. The affected person often formulates grandiose plans and is
uncontrollably hyperactive. Periods of mania often change, sometimes
abruptly, to depression and back again to mania, a condition designated
as bipolar disorder.
mania Disordered mental state of extreme excitement.
bipolar disorder Mood disorder characterized by periods of
depression alternating with normal periods and periods of intense
excitation, or mania.
Neurobiology of Depression
Brain and environment both contribute to the complex neurobiology of
depression. Predisposing factors related to brain anatomy and chemistry
thus may contribute more to affective changes in some people, whereas
life experiences contribute mainly to affective changes in other people.
As a result, a bewildering number of life, health, and brain factors have
been related to depression. These factors include economic or social
failure, circadian rhythm disruption, vitamin D and other nutrient
deficiency, pregnancy, brain injury, diabetes, cardiovascular events, and
childhood abuse, among many others.
A major approach in neurobiological studies of depression is to ask
whether a common brain substrate exists for depression. Antidepressant
drugs acutely increase the synaptic levels of norepinephrine and
serotonin, a finding that led to the idea that depression results from
decreased availability of one or both neurotransmitters. Lowering their
levels in healthy participants does not produce depression, however. And
while antidepressant medications increase the level of norepinephrine
and serotonin within days, it takes weeks for drugs to start relieving
depression.
Among the various explanations suggested for these confounding
results, none is completely satisfactory. Ronald Duman (2004) reviewed
evidence to suggest that antidepressants act, at least in part, on signaling
pathways, such as on cAMP, in the postsynaptic cell. Neurotrophic
factors appear to affect antidepressant action and may underlie the
neurobiology of depression. Investigators know, for example, that brain-
derived neurotrophic factor (BDNF) is down-regulated by stress and up-
regulated by antidepressant medication (Wang et al., 2012).
Section 14-4 explores the relation of hormones, trophic factors, and
psychoactive drugs to neuroplasticity.
Given that BDNF acts to enhance the growth and survival of cortical
neurons and synapses, BDNF dysfunction may adversely affect
norepinephrine and serotonin systems through the loss of either neurons
or synapses. Antidepressant medication may increase BDNF release
through its actions on cAMP. Key here is that the cause of depression
probably is not merely a simple decrease in transmitter levels. Many
brain changes are related to depression.
Mood and Reactivity to Stress
A significant psychological factor in understanding depression is
reactivity to stress. Monoamines—the noradrenergic and serotoninergic
activating systems diagrammed in Figure 16-16 A —modulate hormone
secretion by the hypothalamic–pituitary–adrenal system—the HPA axis
—illustrated in Figure 16-16 B. When we are stressed, the HPA axis is
stimulated to secrete corticotropin-releasing hormone, which stimulates
the pituitary to produce adrenocorticotropic hormone (ACTH). ACTH
circulates through the blood and stimulates the adrenal cortex to
produce cortisol. Normally, cortisol helps us deal with stress. If we
cannot cope, or if stress is intense, excessive cortisol can wield a
negative influence on the brain, damaging the feedback loops the brain
uses to turn off the stress response.
HPA axis Hypothalamic–pituitary–adrenal circuit that controls the
production and release of hormones related to stress.
Section 6-5 explains the neurobiology of the stress response—how it
begins and ends.
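The feedback logic just described lends itself to a crude numerical illustration: cortisol release is driven by stress and is normally suppressed by circulating cortisol itself, so weakening the feedback term leaves cortisol chronically elevated, as in the Python sketch below. All parameters are arbitrary illustrative numbers, not physiological measurements.

    # Crude discrete-time model of HPA feedback: stress drives cortisol release,
    # circulating cortisol suppresses further release, and a fraction is cleared
    # each step. Parameters are illustrative, not physiological.
    def simulate_cortisol(feedback_strength: float, steps: int = 50) -> float:
        cortisol = 0.0
        stress_drive = 1.0
        for _ in range(steps):
            release = max(stress_drive - feedback_strength * cortisol, 0.0)
            cortisol += release - 0.1 * cortisol   # release minus clearance
        return cortisol

    print(round(simulate_cortisol(feedback_strength=1.0), 2))  # intact feedback: settles near 0.9
    print(round(simulate_cortisol(feedback_strength=0.1), 2))  # weakened feedback: settles near 5.0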
Excessive stress early in life may be especially detrimental. During
critical periods in early childhood, abuse or other severe environmental
stress can permanently disrupt HPA axis reactivity: it becomes constantly
overactive. Overactivity in the HPA axis results in oversecretion of
cortisol, an imbalance associated with depression in adulthood. Patrick
McGowan and colleagues (2010) wondered if early experiences could
alter gene expression related to cortisol activity in the HPA axis. They
compared, postmortem, hippocampi obtained from suicide victims with a
history of childhood abuse and hippocampi from other suicide victims
with no childhood abuse or from controls. Abused suicide victims
showed decreased gene expression for cortisol receptors relative to the
controls. These results, derived from epigenetics, confirm that early
neglect or abuse alters the HPA axis for life.
This research confirms studies on the effects of stress on
hippocampal function reported in Sections 6-5 , 7-5 , and 8-4 .
FIGURE 16-16 Stress-Activating System (A ) Medial view showing that in the
brainstem, cell bodies of noradrenergic (norepinephrine) neurons emanate from the
locus coeruleus (top) and cell bodies of the serotonergic activating system emanate
from the raphe nuclei (bottom). (B ) When activated, the HPA axis affects mood,
thinking, and indirectly, cortisol secretion by the adrenal glands. HPA deactivation begins
when cortisol binds to hypothalamic receptors.
Neuronal degeneration. Images from S. C. Spanswick and R. J. Sutherland (2010). Object-context specific memory deficits associated with loss of hippocampal granule cells after adrenalectomy in rats. Learning and Memory, 17:241–245, Figure 2.
Anxiety Disorders
We are all subject to anxiety, usually acutely in response to stress or less
commonly as chronic reactivity—an increased anxiety response—even
to seemingly minor stressors. Anxiety reactions certainly are not
pathological; they are likely an evolutionary adaptation for coping with
adverse conditions. But anxiety can become pathological and make life
miserable. Anxiety disorders are among the most common psychiatric
conditions. The DSM-5 lists 10 classes of anxiety disorders that together
affect 15 percent to 35 percent of the population at some point in their
lives (see Figure 16-14 ).
Focus 12-3 describes symptoms of anxiety disorders; Section 6-5 ,
how stress-induced damage contributes to PTSD
Imaging studies of people with anxiety disorders record increased
baseline activity in the cingulate cortex and parahippocampal gyrus and
enhanced responsiveness to anxiety-provoking stimuli in the amygdala
and prefrontal cortex. This finding suggests excessive excitatory
neurotransmission in a circuit involving cingulate cortex, amygdala, and
the parahippocampal region. Because drugs that enhance the inhibitory
transmitter GABA are particularly effective in reducing anxiety,
researchers hypothesize that excessive excitatory neurotransmission in
this circuit underlies anxiety. But what causes it?
Figure 12-18 diagrams these limbic system structures and charts
their major connections.
Considerable interest has developed in investigating why some people
show pathological anxiety to stimuli to which others have a milder
response. One hypothesis, covered earlier, in the section on depression,
is that stressful experiences early in life increase susceptibility to a
variety of behavioral disorders, especially anxiety disorders.
Although anxiety disorders used to be treated primarily with
benzodiazepines, such as Valium, now they are also treated with SSRIs,
such as Prozac, Paxil, Celexa, and Zoloft. Antidepressant drugs do not
act immediately, however, suggesting that the treatments must stimulate
some gradual change in brain structure, much as these drugs act in
treating depression.
Cognitive-behavioral therapy is as effective as drugs in treating
anxiety. The most effective behavioral therapies expose and re-expose
patients to their fears. For example, treating a phobic fear of germs
requires exposing the patient repeatedly to potentially germy
environments, such as public washrooms, until the discomfort abates.
One more time: a pill is not a skill.
16-4 REVIEW
Understanding and Treating Behavioral Disorders
Before you continue, check your understanding.
1 . Schizophrenia is a complex disorder associated with neurochemical
abnormalities in ___________, ___________, and ___________.
2 . Schizophrenia is associated with pronounced anatomical changes in
the ___________ and ___________ cortices.
3 . The monoamine-activating systems that have received the most
investigation related to understanding depression are ___________
and ___________.
4 . The most effective treatment for depression and anxiety disorders is
___________.
5 . Describe the main difficulty in linking genes to schizophrenia.
Answers appear at the back of the book.
DBS
Electrophysiological (noninvasive manipulation): ECT; TMS, rTMS
Pharmacological (chemical administration): antibiotics or antivirals, psychoactive drugs, neurotrophic factors, nutrition
Behavioral (manipulation of experience): behavior modification, neuropsychological, rt-fMRI
A Chemical Message
1 . chemical synapses; gap junction
2 . experience; learning
3 . axodendritic; axosomatic; axomuscular; axoaxonic; axosynaptic;
axoextracellular; axosecretory; dendrodendritic
4 . dendrite; cell body or soma
5 . When an action potential reaches an axon terminal, (1) a chemical
transmitter that has been synthesized and stored in the axon terminal
(2) is released from the presynaptic membrane into the synaptic cleft.
The transmitter (3) diffuses across the cleft and binds to receptors on
the postsynaptic membrane. (4) The transmitter is deactivated.
REVIEW 5-2
Principles of Psychopharmacology
1 . psychoactive drugs; psychopharmacology
2 . blood–brain barrier; brain
3 . synapses; agonists; antagonists
4 . tolerance; sensitization
5 . in any order: feces; urine; sweat; breath; breast milk
6 . (a) Drug use at home is unlikely to condition drug-taking behavior to
familiar home cues, so tolerance is likely to occur. (b) Novel cues in a
work setting may enhance conditioning and so sensitize the occasional
drug user.
REVIEW 6-2
Neuronal Activity
1 . bars of light
2 . temporal
3 . trichromatic theory
4 . opponent
5 . RGCs are excited by one wavelength of light and inhibited by
another, producing two pairs of what seem to be color opposites—red
versus green and blue versus yellow.
REVIEW 9-5
Reward
1 . rewarding
2 . wanting; liking
3 . in any order: dopamine; opioid; benzodiazepine–GABA
4 . Intracranial self-stimulation is a phenomenon whereby animals learn
to turn on a stimulating electric current to their brain, presumably
because it activates the neural system that underlies reward.
CHAPTER 13
Why Do We Sleep and Dream?
REVIEW 13-1
Sleep Disorders
1 . insomnia; narcolepsy
2 . drug-dependent insomnia
3 . sleep paralysis; cataplexy
4 . REM sleep behavioral disorder or REM without atonia; subcoerulear
5 . Orexin is probably only one of many factors related to waking
behavior, as animals with narcolepsy can be awake but then collapse
into sleep.
CHAPTER 14
How Do We Learn and Remember?
REVIEW 14-1
Intelligence
1 . g factor or general intelligence; multiple intelligences; convergent
and divergent; intelligence A; intelligence B
2 . structural; functional
3 . any 3, in any order: gyral patterns; cytoarchitectonics; vascular
patterns; neurochemistry
4 . epigenetic
5 . Both fMRI and ERP studies show that the efficiency of prefrontal–
parietal circuits is related to standard intelligence measures.
“Executive” function is related to gray matter volume in the frontal
lobe.
REVIEW 15-7
Consciousness
1 . complex
2 . Consciousness
3 . consciousness
4 . Movements in which speed is critical, such as hitting a pitched ball,
cannot be controlled consciously.
CHAPTER 16
What Happens When The Brain Misbehaves?
REVIEW 16-1
FOR STUDENTS
▀ Full e-Book of An Introduction to Brain and Behavior, Fifth Edition
▀ LearningCurve
▀ Video Activities
▀ Summative Quizzes
▀ Neuroscience Tool Kit Activities
▀ Interactive Flashcards
FOR INSTRUCTORS
▀ Lecture Slides
▀ Chapter Figures, Photos, and Tables
▀ Downloadable Diploma Computerized Test Bank
▀ Instructor’s Resources