
CHAPTER 19

Language

PORTRAIT
Multilingual Meltdown
K.H., a Swiss-born architect, was a professor of architecture at a major U.S. university.
Although German was his first language and he was fluent in French and Italian, his primary
language had become English.

He had been an outstanding student, excelling at writing and meticulous about his spelling
and grammar. When his mother complained that he was making spelling and grammatical
errors in his letters to her, written in German, he was astonished. He suspected that he was
forgetting his German and resolved to prevent that from happening.

A few weeks later, K.H. asked a colleague to review a manuscript, in English, that he had just
completed. His colleague commented that K.H. must be working too hard because the
manuscript was filled with uncharacteristic errors. At about the same time, K.H. noticed that
the right side of his face felt “funny.” A neurologist found a small tumor at the junction of the
motor–face area and Broca’s area in the left hemisphere.
The tumor, which was benign, was removed surgically. In the first few days after the surgery,
K.H. was densely aphasic: he could neither talk nor understand oral or written language.
Although he had been warned that aphasia was likely and that it would be temporary, he was
visibly upset. By the end of the first week, he could understand oral language, but his speech
was still unintelligible, and he could not read.

By the end of the second week, K.H. was speaking German fluently but had difficulty with
English, although it was certainly understandable to him. He was still unable to read in any
language but believed that he could read German and could be convinced otherwise only
when he was informed that the book he was supposedly reading was upside down! His
reading and English slowly improved, but even now, years later, K.H. finds spelling in any
language difficult, and his reading is slower than would be expected for a person of his
intelligence and education.

Using language is a precious ability, yet we can take it for granted, as K.H. did before he was stricken. Think about how much your
daily life depends on your ability to talk, listen, and read. We even
talk to ourselves. As children, we learn language long before we can
catch a ball or ride a bicycle, using words to identify and learn
about our environment. We use language to inform and persuade
and to entertain ourselves with poetry, song, and humor. Indeed,
much humor is based on nuances of language and on double
entendres. Using language is our most complex skill, and we can
approach its study in many ways. One place to start is by
considering what language is.
19.1 What Is Language?
The word language derives from langue, an Anglo-French word for
“tongue,” referring to a convention that describes language as the
use of sound combinations for communication. But language also
includes the idea that this use of sounds is guided by rules that,
when translated into other sensory modalities, allow for equivalent
communication through gestures, touches, and visual images.
Many other animal species have evolved forms of communication,
but no other species uses language as humans do. That said, no
universal agreement has emerged on what language is, and
differences in defining language also lead to differences of opinion
about how the brain produces language.

Language Structure
Like most other people, you probably think of words as the
meaningful units of language. Linguists break down language
differently, as summarized in Table 19.1. They view words as
consisting of fundamental language sounds, called phonemes, that
form words or parts of words. Phonological analysis determines
how we link phonemes together.

Table 19.1 Components of a Sound-Based Language

Phonemes Individual sound units whose concatenation, in particular order, produces morphemes

Morphemes Smallest meaningful units of a word, whose combination forms a word

Lexicon Collection of all words in a given language; each lexical entry includes all
information with morphological or syntactical ramifications but does not
include conceptual knowledge

Syntax Grammar — admissible combinations of words in phrases and sentences

Semantics Meanings that correspond to all lexical items and all possible sentences

Prosody Vocal intonation — the tone of voice — which can modify the literal
meaning of words and sentences

Discourse Linking of sentences to constitute a narrative

We combine phonemes to form morphemes, the smallest meaningful units of words, such as a base (do in undo), an affix (un
in undo or er in doer), or an inflection (ing in doing or s in girls).
Some morphemes are themselves complete words; others must be
combined to form words.

A lexicon comprises a memory store that contains words and their meanings — hypothetically, all of the words in a given language.
Words are strung together in patterns that conform to the
language’s rules of grammar — its syntax. A key aspect of syntax is
appropriate choice of verb tense.
The meaning connected to words and sentences is referred to,
collectively, as semantics. Vocal intonation — the tone of voice,
called prosody — can modify the literal meaning of words and
sentences by varying stress, pitch, and rhythm. Discourse, the
highest level of language processing, involves stringing together
sentences to form a meaningful narrative.
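To make the units in Table 19.1 concrete, the toy sketch below decomposes a single word into the levels linguists describe. It is a hypothetical illustration added here, not an analysis from the text; the phonemic transcription and field names are assumptions.

```python
# Toy decomposition of one word into the units listed in Table 19.1.
# Structure and values are illustrative assumptions, not a linguistic standard.
word = "undoing"

analysis = {
    "phonemes": ["ʌ", "n", "d", "u", "ɪ", "ŋ"],   # sound units that concatenate into morphemes
    "morphemes": ["un", "do", "ing"],             # affix + base + inflection
    "lexical_entry": {                            # the kind of information a lexicon stores
        "base": "do",
        "category": "verb",
        "admissible_affixes": ["un-", "-ing", "-er"],
    },
    "semantics": "reversing an action, in progress",
}

print(analysis["morphemes"])   # ['un', 'do', 'ing']
```

Syntax, prosody, and discourse then operate over combinations of such entries rather than over single words.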

This linguistic discussion emphasizes the acoustical nature of basic language components, but analogs exist in visual-based reading, the
touch language of Braille, and the movement language of signing
(e.g., American Sign Language [ASL], or Ameslan). A morpheme in
ASL is the smallest meaningful movement.

The traditional criterion linguists use to recognize language is the presence of words and word components; another characteristic of
human language is its use of syllables made up of consonants and
vowels. Our mouths are capable of producing consonants and
combining them with vowels to produce syllables. Nonhuman
species do not produce syllables, primarily because they do not
produce consonants.

Producing Sound
The basic anatomy that enables humans to produce sound consists
of two sets of parts, one set acting as the sound source and the
other as filters, as modeled in Figure 19.1A and charted in Figure
19.1B. Air exhaled from the lungs drives oscillations of the vocal
cords (or vocal folds), folds of mucous membrane attached to the
vocal muscles, located in the larynx, or “voice box,” the organ of
voice. The rate of vocal-fold oscillation (from about 100 Hz in adult
men, 150 to 250 Hz in women, and up to 500 Hz in small children)
determines the pitch (low to high frequency) of the sound
produced.

Figure 19.1

Vocal Production (A) Modeling how the vocal tract filters speech sound energy from the vocal
cords to produce formants. (B) Flowchart for speech production: the larynx is the source of
sound energy, and the vocal tract filters (shapes) the energy to produce the final sound
output, speech. (C) Cross-sectional views comparing larynx and vocal-tract position in a
chimpanzee and a human. (Information from Fitch, 2000.)
Acoustical energy thus generated then passes through the vocal
tract (pharyngeal, oral, and nasal cavities) and finally out through
the nostrils and lips. As this energy passes through the vocal tract,
the structures in the vocal tract filter (or shape) sound waves
specific to each sound, called formants. Formants modify the
emitted sound, allowing specific frequencies to pass unhindered
but blocking transmission of others. (Review the spectrogram
representation in Figure 15.13.) Filtering plays a crucial role in
speech: the length and shape of the vocal tract determine formant
characteristics, which are modified rapidly during speech by the
movements of the articulators (tongue, lips, soft palate, and so on).
Formants emphasize sound frequencies that are meaningful in
speech.
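The source-filter arrangement modeled in Figure 19.1 can be sketched in a few lines of signal-processing code. The sketch below is only a rough illustration under assumed values (a 125-Hz pulse train standing in for vocal-fold oscillation and two invented formant frequencies); it is not a model presented in the chapter.

```python
# Minimal source-filter sketch: a glottal "source" at a fundamental frequency
# shaped by vocal-tract resonances (formants). Values are illustrative only.
import numpy as np
from scipy.signal import lfilter

fs = 16000                          # sampling rate (Hz)
f0 = 125                            # vocal-fold rate: ~100 Hz adult male, 150-250 Hz female
t = np.arange(0, 0.5, 1 / fs)

# Source: impulse train approximating vocal-fold pulses at f0
source = np.zeros_like(t)
source[::int(fs / f0)] = 1.0

# Filter: second-order resonators standing in for the first two formants of a vowel
def resonator(signal, freq, bandwidth, fs):
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2 * r * np.cos(theta), r ** 2]   # poles at the formant frequency
    return lfilter([1 - r], a, signal)

speech_like = resonator(resonator(source, 700, 80, fs), 1100, 90, fs)
```

Lengthening or reshaping the simulated tract amounts to moving these resonant frequencies, which is what the articulators do continuously during speech.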

The vocal apparatus that produces formants marks a major difference between us and other apes. The human oral cavity is
longer than in other apes, and the human larynx is situated much
lower in the throat, as shown in Figure 19.1C. Starting at about 3
months of age, the human larynx begins a slow descent toward its
adult position, and it settles after age 3 to 4 years. A second, shorter
descent takes place in human males at puberty.

The evolution of our mouth for feeding no doubt had an influence on the sounds that we can make. Descent of the human larynx,
however, is a key evolutionary and developmental innovation in
speech. By allowing the tongue to move both vertically and
horizontally within the vocal tract, a lowered larynx enables us to
vary the area of the oral and pharyngeal tubes independently,
adding to the variety of sounds we can produce easily. Sound energy
fuels our primary means of communicating, but language exists in
forms other than sound, including gestures, the touch language of
Braille, and the visual languages of reading and ASL. Language need
not involve sound.

Core Language Skills


Four core skills underlie human language: (1) categorizing, (2)
category labeling, (3) sequencing behaviors, and (4) mimicry. It is
likely that each of these four abilities depends upon somewhat
different neural circuits.

Categorization
Multiple parallel hierarchical neural networks function to process
incoming sensory stimulation. As the cortex expands through our
early development and the number of networks that process
parallel sensory information increases, binding (integrating) the
information into a single perception of reality becomes more
difficult. The brain must determine which of myriad kinds of
sensory information reaching the cortex correspond to a given
object in the external world. Thus, it becomes necessary to
categorize information — for example, to tag some qualities as
belonging to plants and others as belonging to animals.
Assigning tags to information makes it easier to perceive the
information and to retrieve it later when needed, as we do, for
example, with cladograms used to classify plants and animals (see
Figure 2.1). The ventral visual stream coursing through the
temporal lobes participates in object categorization, and the dorsal
stream may also participate by making relatively automatic
distinctions between objects, such as plant versus animal or human
versus nonhuman. (In Section 19.3, we describe how the brain
categorizes language into semantic maps.)

Labeling Categories
Words are the ultimate categorizers, and the use of words as tags to
label categories rests on a preexisting perception of what categories
are. The development of human language may have entailed
selection for novel means of categorization that not only allowed for
combining and grouping simple sensory stimuli but also provided a
means of organizing events and relationships.

This process of categorization can stimulate the production of word forms about that concept (the category); conversely, it can cause
the brain to evoke the concepts in words. Thus, a man who was
once a painter but is now color-blind can know and use the words
(labels) for colors, even though he can no longer perceive or
imagine what those labels mean. He has, in a sense, lost his
concept of color, but his words can still evoke it. In contrast,
certain brain-lesion patients retain their perception of color, and
thus the concept, but have lost the language with which to describe
it. They experience colors but cannot attach labels to them.

Thus, labeling a category includes not only identifying it — a function of the brain’s language network — but also organizing
information within the category. For example, within the category
labeled tools, our brains may organize “hammer” as a noun, a verb,
an adjective, and so on.

Sequencing Behavior
Human language employs transitional larynx movements to form
syllables. Left-hemisphere structures associated with language
form part of a system that has a fundamental role in ordering vocal
movements, such as those used in speech. We can also sequence
face, body, and arm movements to produce nonverbal language.
Sequencing words to represent meaningful actions likely makes use
of the same dorsal-stream frontal cortex circuits that sequence
motor action more generally.

Mimicry
Mimicry fosters language development. Athena Vouloumanos and
Janet Werker (2007) have found that from birth, babies show a
preference for listening to speech over other sounds. When they
begin to babble, they are capable of making the sounds used in all
languages. They also mimic and subsequently prefer the language
sounds made by the people in their lives. Over the course of their
early development, infants also lose the ability to distinguish
sounds that they no longer use. According to some estimates, in the
formative years, by mimicking the speech of others, children may
add as many as 60 new words each day to their vocabularies. Our
mirror system neurons respond when we see others make
movements and also when we make the same movements (see
Section 9.1). One explanation of mimicry is that mirror neurons in
the cortical language regions are responsible for our ability to
mimic the sounds, words, and actions that comprise language.
19.2 Searching for the Origins of
Language
Two theoretical approaches attempt to explain human language
origins. Discontinuity theories propose that language evolved
rapidly and appeared suddenly, occurring in modern humans in the
past 200,000 years or so. Continuity theories propose that language
evolved gradually, possibly by modifications of communication
common to many animal species that then developed into language
in ancestral hominin species. The Snapshot describes how one gene
that has been related to human language bears on these theories.

SNAPSHOT
Genetic Basis for an Inherited Speech and Language
Disorder
Almost half of the members of three generations of the KE family are affected by a severe
disorder of speech and language inherited as an autosomal (non–sex chromosome)
dominant trait in the forkhead-box protein 2 gene (FOXP2) (Vargha-Khadem et al., 2005). The
impairment, displayed by 15 of 37 family members (Figure A), is best characterized as a
deficit in sequencing articulation patterns, rendering speech sometimes agrammatical and
often unintelligible. The orofacial aspect affects the production of sound sequences, which
makes the deficit resemble Broca aphasia. The affected members are impaired on tests of
mouth movement (oral praxis), including simple movements of clicking the tongue and
movement sequences (e.g., blowing up the cheeks, then licking the lips, and then smacking
the lips).
Figure A

KE family pedigree, showing the extent of the inherited language impairment.


(Information from Watkins et al., 2002.)

MRI analysis of the affected family members’ brains showed significantly less gray matter in
a number of regions of the neocortex, including the motor regions of the neocortex, along
with increased gray matter in a number of other regions of the neocortex, especially the
somatosensory and auditory regions (Figure B). Many of these brain regions are associated
with producing facial movements necessary for language, among other sensory–motor
behaviors.
Figure B

Brain regions containing altered gray matter in members of the KE family affected by
the FOXP2 gene mutation. Blue represents a decrease in gray matter, and red represents
an increase in gray matter. (Research from Co et al., 2020.)

The gene that makes the forkhead-box protein 2 is implicated in this disorder. The FOXP2
gene regulates the expression of more than 300 other genes during development and during
learning, mainly by blocking their expression. The genes regulated by FOXP2 are present in
different brain regions and also other body regions, including the lungs. Study of the FOXP2
gene has led to the study of related genes, including the FOXP1 and FOXP4 families of genes.

The FOXP genes are especially active during development, suggesting their involvement in
neocortical development, and they remain active throughout life, suggesting an ongoing role
in cortical function. Mutations of FOXP genes have also been implicated in some cases of
autism spectrum disorder, attention-deficit/hyperactivity disorder, and other
neurodevelopmental disorders (Co et al., 2020).

The FOXP2 gene is highly conserved in that it is similar in many nonhuman animal species,
where it also plays a role in the development of many parts of the brain as well as other body
organs. The FOXP2 gene is expressed in brain areas that regulate song learning in birds,
singing in whales, and ultrasonic vocalizations in mice. FOXP2 gene mutations in these
species affect a number of aspects of behavior, including sound production.

The FOXP2 gene has undergone two mutations over the course of hominin evolution. Such
rapid evolution suggests that these mutations may have altered neuronal circuitry in the
brain’s sensorimotor regions to enable movements that may have originally been related to
mouth movements for eating and drinking to contribute to human speech.

Co, M., Anderson, A. G., & Konopka, G. (2020). FOXP transcription factors in vertebrate brain
development, function, and disorders. Wiley Interdisciplinary Reviews of Developmental
Biology, 30, e375. https://doi.org/10.1002/wdev.375
Vargha-Khadem, F., Gadian, D. G., Copp, A., & Mishkin, M. (2005). FOXP2 and the
neuroanatomy of speech and language. Nature Reviews Neuroscience, 32, 131–138.
Watkins, K. E., Dronkers, N. F., & Vargha-Khadem, F. (2002). MRI analysis of an inherited
speech and language disorder: structural brain abnormalities. Brain, 125(3), 465–478.

Discontinuity Theories
Discontinuity theories emphasize the syntax of human languages
and propose that language arose quite suddenly in modern humans
(Berwick et al., 2013). One emphasis is recognizing the unique
species-specific “computational core” of human language — its
sounds, syntax, and semantics.

Another approach of discontinuity theories attempts to trace language origins by comparing similarities in word use. For
example, Morris Swadesh (1971) developed a list of 100 basic lexical
concepts that he expected would be found in every language. These
concepts included such words as I, two, woman, sun, and green. He
then calculated the rate at which these words would have changed
as new dialects and languages emerged. His estimates suggest a rate
of change of 14% every 1000 years. By comparing the lists of words
spoken in different parts of the world today, he estimated that,
between 10,000 and 100,000 years ago, everyone spoke the same
language. According to Swadesh’s logic, the first language would
have been spoken by everyone, with diversification beginning
almost as soon as that language had developed.
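As a rough illustration of the arithmetic behind such estimates, the sketch below applies the standard glottochronological formula to the 14%-per-millennium change rate quoted above. The divergence-time formula and the example cognate percentage are assumptions added for illustration, not calculations from the chapter.

```python
# Glottochronology sketch: estimate how long ago two languages diverged
# from the fraction of the 100-word core list they still share.
import math

retention = 1 - 0.14          # ~86% of core words survive each 1000 years

def divergence_time(shared_fraction, r=retention):
    """Millennia since two languages split. Both lineages lose words
    independently, hence the factor of 2 in the denominator."""
    return math.log(shared_fraction) / (2 * math.log(r))

# Example: two languages sharing 22% of the core list would have diverged
# roughly 5,000 years ago under these assumptions.
print(round(divergence_time(0.22), 1))   # ≈ 5.0 millennia
```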

Philip Lieberman (2003) studied the vocal-tract properties that enable modern humans to make the sounds used for language (see
Figure 19.1C). Neither modern apes nor newborn humans can
produce all of the sounds used in human speech. Lieberman
concludes that language appeared along with the lowered vocal
tract in modern humans within about the past 200,000 years.

Another argument for aligning the origin of language development with modern humans, rather than with earlier hominin species, is
that the ability to write and the ability to speak have a lot in
common. Both require very fine movements and many movement
transitions. Therefore, speech and writing could have appeared at
about the same time. Alexander Marshack (1971) found that the first
symbols attributed to modern humans date to about 40,000 years
ago, adding to the evidence that speech appeared before or at least
at about this time.

Peter MacNeilage (1998) argues that the critical feature of language is articulation — basically what the mouth does. The mouth is
usually opened once for each vocal episode, and the shape of the
cavity between the lips and the vocal tract modulates the sound.
Articulation is unique to humans and is employed in virtually every
utterance of every language (with the exception of a few words
consisting of a single vowel).

In human speech, the mouth alternates more or less regularly between a relatively open (for vowels) and a relatively closed (for
consonants) configuration. To MacNeilage, the question this
observation raises is not how the vocal tract changed but how the
brain changed to provide the motor control necessary for the
mouth to make syllables. Many of these changes, he reasons, are
likely related to the development of fine mouth movements made in
eating the foods that comprise the modern human diet.

What seems to link these separate lines of evidence, making the discontinuity theory’s recency hypothesis plausible, is that modern humans first appeared within the past 200,000 years. The evolution of Homo sapiens was quite sudden: they had a lowered vocal tract, were capable of making deft mouth movements, and created art, and so the theory is that one of their adaptive strategies was vocal language.

Continuity Theories
Continuity theories also consider many lines of evidence, including
the adaptation of animal vocalization for language (Schoenemann,
2012). Perhaps it is a tribute to the imagination with which speculators approached the question of which vocalizations gave rise to language that, in 1866, the Linguistic Society of Paris banned further discussion of vocalization theory. We will not let that ban deter us.

Gordon Hewes (1977) reviewed many variants of animal vocalization theory, including the pooh-pooh theory (language evolved from noises associated with strong emotion), the bow-wow theory (language evolved from noises first made to imitate natural sounds), the yo-he-ho theory (language evolved from grunts and chants made during communal physical effort), and the sing-song theory (language
evolved from noises made while playing or dancing).

Scientific evidence that vocalization contributes to language origins comes from studying chimpanzees. The results of Jane Goodall’s
studies on the chimpanzees of Gombe in Tanzania indicate that our
closest relatives have as many as 32 separate vocalizations. Goodall
(1986) noted that the chimps seem to understand these calls much
better than humans do, although her field assistants, the people
most familiar with the chimps, can distinguish them well enough to
claim that the actual number is higher than 32. Figure 19.2
illustrates the wide range of vocalizations made by free-living
chimpanzees.
Figure 19.2

Precursors of Language Chimpanzee calls and the emotion or feeling with which
they are most closely associated. (Goodall, J. The Chimpanzees of Gombe.
Cambridge, Mass.: Harvard University Press, 1986. Permission granted by The Jane
Goodall Institute.)

Jared Taglialatela and his coworkers (Issa et al., 2019) have examined the microstructure of bonobo and gorilla brains and note
many differences between the two species in communication-
related brain regions. These researchers have also recorded
vocalizations made by the bonobo Kanzi and examined neural
structures related to bonobo communication. They report that
Kanzi made both communicative sounds and sounds related to
eating. As the Chapter 2 Portrait notes, these investigators found
that Kanzi’s food-related peeps were structurally different in
different contexts. Thus, “chimpanzeeish” is a form of
communication.

Varied evidence supports the contribution that gestures have made to language evolution. Many animals communicate with movement.
At the simplest, when one animal moves, others follow. We have all
observed the gestures displayed by a dog that wants us to open a
door. We understand such gestures and might also make distinct
gestures when asking the dog to go through the door.

We can observe the rudiments of subject–object–verb (SOV) syntax in movements such as reaching for a food item (Schouwstra & de
Swart, 2014). The subject is a hand, the object is the food, and the
verb is the reach. Clearly, our pet dog watching us reach for a food
item understands. According to this idea, language begins in the
brain regions that produce movement, but the notable adaptation in
human language is its specialization for communication.

Nonverbal gestures are closely related to speech. David McNeill (2005) reports that hand and body gestures accompany more than
90% of our verbal utterances. Most people gesture with the right
hand when they speak: their gestures are produced by the left
hemisphere, as is most language. Gestures thus form an integral
component of language, suggesting that our language comprises
more than speech. The neural basis of language is not simply a
property of brain regions controlling the mouth but includes the
motor system more generally.

As early as 1878, John Hughlings-Jackson suggested that a natural experiment would support the idea that gestural language is related
to vocal language. The loss of certain sign-language abilities by
people who had previously depended on sign language (e.g., ASL),
he reasoned, would provide the appropriate evidence that gestural
language and vocal language depend on the same brain structure.
Hughlings-Jackson even observed a case that seemed to indicate
that a left-hemisphere lesion disrupted sign language, as it would
vocal language.

Doreen Kimura (1993) confirmed that lesions disrupting vocal speech also disrupt signing. Of 11 patients with signing disorders subsequent to brain lesions, 9 right-handers had disorders subsequent to a left-hemisphere lesion similar to one that would produce aphasia in a speaking person. One left-handed patient had a signing disorder subsequent to a left-hemisphere lesion, and another left-handed patient had a signing disorder subsequent to a
right-hemisphere lesion. These proportions and the location of the
lesions are similar to those found for vocal patients who become
aphasic. Samuel Evans and colleagues (2019) used fMRI to compare
brain areas active in verbal speakers and in signers. Their results
support the idea that verbal language and sign language depend on
somewhat similar neural structures.

Gestures and other visual cues also play a significant role in the
listening process. You may be familiar with the cocktail-party effect.
When listening to speech in a noisy environment, we can “hear”
what a speaker is saying much better if we can see the speaker’s
lips. A phenomenon called the McGurk effect, after its originator,
Harry McGurk (Skipper et al., 2007), offers another demonstration
of “seeing” sounds. When viewers observe a speaker say one word
or syllable while they hear a recording of a second word or syllable,
they “hear” the articulated word or sound that they saw and not the
word or sound that they actually heard. Or they hear a similar but
different word entirely. For example, if the speaker is mouthing “ga”
but the actual sound is “da,” the listener hears “ga” or perhaps the
related sound “ba.” The McGurk phenomenon is robust and
compelling.

You may also have read the transcript of a conversation that took
place between two or more people. It can seem almost
incomprehensible. Had you been present, however, your
observation of the speakers’ accompanying gestures would have
provided clarity. Taken together, studies on vocalization and studies
on gestures, including signing, show that communication is more
than vocalization and that what makes humans special is the degree
to which we communicate.

Experimental Approaches to
Language Origins
Research on language origins considers the many types of
communication different animal species employ, including
birdsong, the elaborate songs and clicking of dolphins and whales,
and the dances of honeybees. Each contains elements of the core
skills underlying language. Languagelike abilities are present in
many different brains, even brains extremely different from our
own.

Irene Pepperberg’s 30-year study of Alex, an African gray parrot, shown in Figure 19.3, represents a remarkable contribution to
language research. Alex could categorize, label, sequence, and
mimic. Pepperberg (2008) could show Alex a tray of four corks and
ask, “How many?” Alex would reply, “Four.” He correctly applied
English labels to numerous colors, shapes, and materials and to
various items made of metal, wood, plastic, or paper. He used
words to identify, request, and refuse items and to respond to
questions about abstract ideas, such as the color, shape, material,
relative size, and quantity of more than 100 different objects. Birds
do not possess a neocortex, but parrots’ forebrains have cortexlike
connections and house an enormous number of neurons,
comparable to much larger primate brains. This anatomy likely
accounts for Alex’s ability to learn forms of “thought,” “speech,” and
“language.”
Figure 19.3

Language in Birds The African gray parrot Alex, with researcher Irene Pepperberg
and a sampling of the items he could count, describe, and answer questions about.
Alex died in 2007, aged 31.

Training for Language in Nonhuman Apes


A definitive test of the continuity theory is whether our closest
relatives, chimps, as well as other apes, can use language.
Chimpanzees share with humans some behaviors and anatomy
related to language, including handedness and left–right asymmetry
in language areas of the brain (Hopkins, 2013). In the 1940s, Keith
and Catherine Hayes (1951) raised Vicki, a chimpanzee, as a human
child. They made a heroic effort to get her to produce words, but
after 6 years of training, she produced only four sounds, including a
poor rendition of “cup.”

Beatrice and Allen Gardner (1978) used a version of American Sign Language to train Washoe, a year-old chimp they brought into their
home. They aimed to teach Washoe ASL hand signs for various
objects or actions (called exemplars). These signing gestures,
analogous to words in spoken language, consist of specific
movements that begin and end in a prescribed manner in relation
to the signer’s body (Figure 19.4).
Figure 19.4

Exemplars from American Sign Language The Gardners and others taught such
symbols to the chimpanzees in their studies. (Information from Gustason et al., 1975.)

Washoe was raised in an environment filled with signs. The Gardners molded her hands to form the desired shapes in the
presence of exemplars, reinforcing her for correct movements, and
they used ASL to communicate with each other in Washoe’s
presence. She did learn to understand and to use not only nouns but
also pronouns and verbs. For example, she could sign statements
such as “You go me,” meaning “Come with me.” Attempts to teach
ASL to other species of great apes (gorilla, orangutan) have seen
similar success.

David Premack (1983) formalized the study of chimpanzee language abilities by teaching his chimpanzee, Sarah, to read and write with
variously shaped and colored pieces of plastic, each representing a
word. Premack first taught Sarah that different symbols represent
different nouns, just as Washoe had been taught in sign language.
Sarah learned, for example, that a pink square was the symbol for
banana. She was then taught verbs so that she could write and read
such combinations as “give apple” or “wash apple.”

Premack tested Sarah’s comprehension by “writing” messages to her — that is, by hanging up a series of symbols — and then observing
her response. Much more complicated tutoring followed, in which
Sarah mastered the interrogative (“Where is my banana?”), the
negative, and finally the conditional (if … then). Sarah learned a
fairly complicated communication system, analogous in some ways
to simple human language.

Duane Rumbaugh launched Project Lana, which called for teaching the chimp Lana to communicate by means of a computer-
programmed keyboard (Rumbaugh & Gill, 1977). This computer-
based training facilitated collection of the large volume of data the
language-training procedures generated. The keyboard, shown in
Figure 19.5, was composed of nine stimulus elements and nine
main colors that could be combined in nearly 1800 lexigrams to
form a language now known as Yerkish (Savage-Rumbaugh et al.,
1986).

Figure 19.5

Lana’s Keyboard Yerkish consists of nine basic design elements that are
combined to form lexigrams. The photo above shows Lana at the Yerkish
keyboard. An example of these characters is also shown: Design elements (A) are
combined to form lexigrams (B).

Lana had simply to type out her messages on the keyboard. She was
trained first to press keys for various single incentives. The
requirements became increasingly complex, and she was taught to
compose statements in the indicative (“Tim move into room”), the
interrogative (“Tim move into room?”), the imperative (“Please Tim
move into room”), and the negative (“Don’t Tim move into room”).
Eventually, Lana was composing strings of six lexigrams.
Kanzi, introduced earlier in this section, spontaneously learned to
communicate using Yerkish by watching his mother Matata’s failed
training sessions. Kanzi’s knowledge of English words has exceeded
his knowledge of Yerkish lexigrams. To facilitate his learning, his
keyboard was augmented with a speech synthesizer. When he was 6
years old, Kanzi was tested on his comprehension of multisymbol
utterances. He responded correctly to 298 of 310 spoken sentences
of two or more utterances. Joel Wallman (1992) concluded that
Kanzi’s use of lexigrams constitutes the best evidence available to
date for the referential application of learned symbols by an ape.

Conclusions from Investigations of Language Origins
Two explanations address the neural basis for the rudimentary
ability of other animal species to acquire some aspects of language.
The first holds that when the brain reaches a certain level of
complexity, it has the ability to perform some core language skills,
even in the absence of a massive neocortex with dedicated neural
structures. This view is applicable to modern humans’ ability to
read and write. We acquired these behaviors so recently that it is
unlikely that the brain specifically evolved to engage in them.

Another view is that all brains have communicative functions, but the ways that communication takes place vary from species to
species. Apes, as social animals, clearly have a rudimentary
capacity to use sign language. They use gestures spontaneously, and
formal training can foster this skill. Nevertheless, apes also have a
much greater predisposition to understand language than to
produce it, which derives from observing and responding to the
many social behaviors of their compatriots. Anyone watching films
of apes’ performance in response to human vocal commands
cannot help but be impressed by their level of understanding.

Language in Hominins
Mounting evidence indicates that Neanderthals and some of our
other hominin cousins were more similar to us in language ability
than they were different. Dediu and Levinson (2013) summarize
evidence from discoveries related to Neanderthal culture and
genetics to argue that they likely shared similar language abilities.
This pushes the origins of language back 800,000 years or so, to the
common ancestor of Neanderthal and modern humans. Having
made such a grand leap, it may not be farfetched to consider that
verbal language is a property of the hominin brain. As such, it may
have become a primary driver in hominin evolution.

Taken together, this body of research supports the view of continuity theorists that the basic capacity for languagelike
processes was present to be selected for in the common ancestor of
humans and apes, and the rudiments of verbal language began with
the origins of the hominin lineage.
19.3 Localization of Language
As described in Section 1.3, early investigations suggested that
language is a function of the left hemisphere and that Broca’s and
Wernicke’s areas, and the pathway connecting them, form the
primary language system of the brain. More recent anatomical and
stimulation methods suggest that there are two neural systems: one
for sounds and one for meanings. In addition, evidence from PET
and fMRI studies, detailed below, indicates that there is a language
network that involves surprisingly large areas of the neocortex of
both hemispheres. These findings suggest that the cortical
representation of language can be envisioned as a brain dictionary.

The following sections will follow this progression in our understanding of language from a localized brain function to a distributed function involving large regions of the left and the right
hemispheres.

Wernicke–Geschwind Model
Broca and Wernicke identified speech areas in patients who had
lesions from stroke. Wernicke’s early neurological model of
language and its revival in the 1960s by Norman Geschwind, as the
Wernicke–Geschwind model, were both based entirely on lesion
data. As diagrammed in Figure 19.6, the three-part model proposes
that comprehension is (1) extracted from sounds in Wernicke’s area
and (2) passed over the arcuate fasciculus pathway to (3) Broca’s
area, to be articulated as speech. Other language functions access
this comprehension–speech pathway as well. The Wernicke–
Geschwind model has played a formative role in directing language
research and organizing research results. More recent research
suggests, however, that although the terms Broca aphasia (loss of the
ability to produce written, spoken, or gestural language) and
Wernicke aphasia (loss of the ability to comprehend speech) may be
useful descriptors of the effects of stroke to anterior and posterior
cortical regions, they are less useful in designating the brain areas
involved in language. That is because the posterior and anterior
language zones are more complex than anticipated by the
Wernicke–Geschwind model.
Figure 19.6

Wernicke–Geschwind Model The classical anterior and posterior speech zones, connected by the arcuate fasciculus.

Anterior and Posterior Language Regions
As shown in the anatomical reconceptualization in Figure 19.7,
Broca’s area (areas 44 and 45) and the regions surrounding it are
composed of a number of cortical subregions now suggested to be
involved in language. These findings were derived from analyzing
various receptor types in these areas’ neurons (Amunts et al., 2010).
Figure 19.7

Subdivision of Broca’s Area Anatomical map of Broca’s area that includes areas 44
and 45 with subdivisions, three subdivisions added to area 6, and a plethora of
smaller regions. (Research from Amunts et al., 2010.)

Broca’s area consists of two main divisions, each with two subdivisions: an anterior and a posterior region in area 45, and a dorsal and a ventral region in area 44. Ventral premotor area 6, related to facial movements and
containing mirror neurons, has three subdivisions. Surrounding
and wedged between these regions and subdivisions in Figure 19.7
are numerous smaller areas. At present, no imaging or behavioral
work has assigned functions to these smaller areas.

This modern reconceptualization of the anatomy within and surrounding Broca’s area points to the conclusion that many
challenges prevent us from fully understanding the anatomical
basis of the language function in this brain region.

Binder (2017) proposes on the basis of a meta-analysis of fMRI studies that the temporal and parietal regions of the cortex also
consist of a number of subregions, each with different functions.
As illustrated in Figure 19.8, at least four regions contribute to the
phoneme components of words, the semantic meaning of words,
and the assembly of words into sentences. A region in the middle
superior temporal gyrus uses sound information for the perception
of phonemes. An adjacent, more posterior region of the superior
temporal gyrus uses phoneme perception to construct the sounds
of words. Regions around the angular gyrus and the anterior and
ventral temporal lobe are associated with word meanings, and a
region in the central portion of the middle temporal gyrus is
associated with sentence construction. Connections between these
areas assemble sounds into semantic information — the meaning of
words and sentences.
Figure 19.8

Posterior Cortical Regions for Phonemes and Semantics Four regions of the
posterior language region. The middle superior temporal gyrus is associated with
phonological perception (blue), the posterior superior temporal gyrus is associated
with phonological sounds of words (purple), the area of the angular gyrus and the
ventral temporal cortex is associated with word meanings (green), and the central
portion of the middle temporal gyrus is associated with sentence construction (red).
(Research from Binder, 2017.)

Dual Pathways for Language


On the basis of brain-imaging studies and other evidence, Evelina
Fedorenko and Sharon Thompson-Schill (2014) proposed that the
temporal and frontal cortices are connected by pairs of dorsal and
ventral language pathways, which are viewed as extensions of the
dorsal and ventral visual streams (Figure 19.9). This model thus proposes two sets of pathways rather than the single pathway featured in the Wernicke–Geschwind model.

Figure 19.9

Dual Language Pathways Dorsal language pathways convey phonological information for articulation; ventral pathways convey semantic information for
meaning. All the pathways are involved in syntax and may contribute to short- and
long-term memory for language. (Research from Berwick et al., 2013.)

The double-headed arrows on both paired pathways in Figure 19.9 indicate that information flows in both directions between the
temporal and frontal cortices. Visual information enters the
auditory language pathways via the dorsal and ventral visual
streams and contributes to reading. Information from body-sense
regions of the parietal cortex also contributes to the dorsal and
ventral language pathways and likely contributes to touch language,
such as Braille. Noteworthy in this new model, the ventral premotor
region of area 6 is a target of the dorsal language stream, and
Brodmann’s area 47, located anterior to area 45, is a target in the
ventral language stream.

At the most basic level of analysis, the dorsal language pathways are
proposed to transform sound information into motor
representation — to convert phonological information into
articulation. The ventral language paths are proposed to convey
phonological information into semantic information. Information
flow in the dorsal pathway is bottom-up, as occurs when we are
asked to repeat nonsense words or phrases. Thus, the temporal
cortex assembles sounds by phonetic structure and passes them
along to the frontal cortex for articulation. No meaning is assigned
to sounds in this pathway. Information flow in the ventral pathway
is proposed to be more top-down, assigning meaning to words and
phrases, as occurs when we assign a specific meaning to a word,
such as “hammer,” that has various meanings.

The dorsal and ventral language pathways are engaged in syntax, the arrangement of words and sentences. The dorsal pathway
categorizes sounds in terms of frequency of association, and the
ventral pathway extracts meaning from the grammatical
organization of words. Both sets of language pathways are involved
in short- and long-term memory for the phonetic and semantic
components of speech, respectively. Nonverbal speech, including
reading and sign language from the visual cortex and Braille from
the parietal cortex, also uses these pathways.

In short, there is a dorsal pathway for sounds and a ventral pathway for meaning. We can speculate that some aphasic patients who can
read but do not understand the meaning of what they read have
damage to the ventral language pathways. Similarly, some patients
who cannot articulate words but can understand them might have
damage to the dorsal pathway. Patients with damage to both
language pathways would not be able to repeat words (mediated by
the dorsal pathways) or articulate meaningful words (mediated by
the ventral pathways).

Speech Zones Mapped by Brain Stimulation and Surgical Lesions
Wilder Penfield and others identified the neocortical language
zones, particularly those pertaining to speech, using intracortical
stimulation during surgery. (See Section 9.1 for more on Penfield’s
work.) Statistical analyses of results from hundreds of patients have
contributed to the mapping of these regions, including the classical
Broca’s and Wernicke’s areas in the left hemisphere, as well as the
sensory and motor representations of the face and the
supplementary speech area in both hemispheres. Damage to these
sensory and motor areas produces transient aphasia (aphasia from
which considerable recovery occurs).

Cortical stimulation of areas diagrammed in Figure 19.10 produces either positive effects, eliciting vocalization that is not speech but
rather a sustained or interrupted vowel cry, such as “Oh,” or
negative effects, inhibiting the ability to vocalize or to use words
properly, including a variety of aphasia-like errors:

Total speech arrest or an inability to vocalize spontaneously This error results from stimulation throughout the shaded zones in Figure 19.10.
Hesitation and slurred speech Hesitation results from stimulation
throughout the zones shaded in Figure 19.10, whereas slurring
results primarily from stimulation of the dorsal regions in
Broca’s area and the ventral facial regions of premotor and
motor cortices.
Distortion and repetition of words and syllables Distortion differs
from slurring in that the distorted sound is an unintelligible
noise rather than a word. These effects result primarily from
stimulating Broca’s and Wernicke’s areas, although occasionally
from stimulating the face area as well.
Number confusion while counting For example, a patient may
jump from “6” to “19” to “4,” and so on, resulting from
stimulation of Broca’s or Wernicke’s area.
Inability to name objects despite retained ability to speak An
example is, “That is a … I know. That is a.…” When the current
was removed, the patient was able to name the object correctly.
Another example is, “Oh, I know what it is. That is what you
put in your shoes.” After withdrawal of the stimulating
electrodes, the patient immediately said “foot” (Penfield &
Roberts, 1959, p. 123). Naming difficulties arise from
stimulation throughout the anterior (Broca’s) and posterior
(Wernicke’s) speech zones.
Misnaming and perseverating Misnaming may occur when a
person uses words related in sound, such as “camel” for
“comb,” uses synonyms, such as “cutters” for “scissors,” or
perseverates by repeating the same word. For example, a
person may name a picture of a bird correctly but may also call
the next picture — of a table — a bird. Misnaming, like other
naming difficulties, occurs during stimulation of both the
anterior and the posterior speech zones.
Figure 19.10
Speech Interference Regions where electrical stimulation has been shown to affect
speech. Damage to areas around Broca’s and Wernicke’s areas is proposed to
produce chronic aphasia, whereas damage to sensory and motor regions is proposed
to produce transient aphasia. Damage outside these areas does not produce aphasia.

George Ojemann (2003) reports that during stimulation of Broca’s area, patients were unable to make voluntary facial movements,
and stimulation of these same points may also disrupt phonemic
discrimination and gestures, such as hand movements, associated
with speech. Most reports agree that the extent of the cortical
language zones as marked by electrical stimulation and surgical
lesions varies considerably among subjects. It is noteworthy that
these classic studies were performed using single nouns; brain
regions involved in speech would likely be larger and perhaps
somewhat different were verbs and sentence stimuli also used
(Rofes & Miceli, 2014). We can add to this point that brain
stimulation would be unlikely to elicit narrative, as in storytelling, a
behavior that lesion studies suggest the right hemisphere
contributes to.

In large part, these classical stimulation studies have been interpreted as being consistent with the Wernicke–Geschwind
model of language. More recent studies using TMS have a somewhat
different take on stimulation-induced results with respect to
language organization.
Speech Zones Mapped by Transcranial Magnetic
Stimulation (TMS)
Intracortical microstimulation and lesions have numerous
drawbacks as methods for studying the neural basis of language:
the procedures are performed during surgery in which a portion of
the skull is removed, and the patients often have preexisting brain
conditions that may lead to anomalous language organization. In
contrast, transcranial magnetic stimulation (TMS) can be used to
noninvasively explore the neural basis of language in healthy
people (Burke et al., 2019).

TMS can interfere with neural function, producing a virtual lesion lasting from tens of milliseconds to as long as an hour. At
appropriate frequencies and intensities, TMS can prime neurons to
enhance reaction times for behaviors dependent on the region that
is stimulated. TMS is relatively easy to use, can be used repeatedly,
and, when combined with MRI, can allow predetermined brain
regions to be examined under controlled experimental conditions.

TMS has drawbacks in that the stimulator produces a sound that can cue a participant or subject to the stimulation. In addition, the
stimulation must pass through the scalp, skull, and meninges and
can cause muscle contractions, discomfort, and pain. Finally, the
stimulation does not easily access regions located deep within sulci.
New modifications to TMS will likely bypass these drawbacks.

Mapping language regions with TMS aids in defining cortical contributions to language, and TMS has been used to map specific
brain regions, such as Broca’s area (Kim et al., 2014). Participants
were presented word pairs on a computer screen and required to
decide whether the words meant the same thing (e.g., gift and
present) or sounded the same (e.g., key and quay). Stimulation of the
anterior region of Broca’s area increased reaction times for the
semantic condition but not for the phonological condition
(illustrated in Figure 19.11), whereas stimulation of the posterior
region of Broca’s area increased the reaction time for the
phonological condition but not for the semantic condition. These
results are consistent with fMRI studies, such as Binder’s work
(2017) described above, that show that the anterior region of Broca’s
area is implicated in semantic processing (processing the meaning
of words) and the posterior region of Broca’s area is implicated in
phonological processing (the production of words).
Figure 19.11

Phonological and Semantic Regions of Broca’s Area Stimulation of the anterior and
posterior extent of Broca’s area by TMS inhibits semantic and phonological
processing, respectively. (Information from Devlin & Watkins, 2007.)
Speech Zones Mapped by Brain-
Imaging Techniques
Using fMRI to measure brain areas implicated in language, Binder
and his colleagues (1997) reported that sound-processing areas
make up a remarkably large part of the brain. These researchers
presented either tones or meaningful words to 30 right-handed
participants, half of whom were male and half of whom were
female. Tone stimuli consisted of a number of 500- and 750-Hz pure
tones presented in sequence. The participants pressed a button if
they heard two 750-Hz tones in a sequence. Word stimuli were
spoken English nouns designating animals (e.g., turtle). Participants
pushed a button if an animal was both native to the United States
and used by humans. A rest condition consisted of no stimulus
presentations.

By subtracting the activation seen during the rest condition from the activation produced by tones, the researchers identified brain regions that are responsive to tones. By subtracting the activation produced by tones from the activation produced by words, they identified brain regions that are responsive to words. They found
that words activate widespread brain regions, including areas in the
occipital, parietal, temporal, and frontal lobes; the thalamus; and
the cerebellum (Figure 19.12).
Figure 19.12

Aural Activation Left-hemisphere brain regions, shaded red, and the cerebellum (not
shown), were activated while participants listened to speech, as measured by fMRI.
Participants listened to spoken English nouns designating animals and were required
to decide, in each case, whether the word indicated an animal native to the United
States and used by humans. (Research from Binder et al., 1997.)

Using PET and a wider range of stimuli, a number of research groups have identified more specific functions for some of these
language areas, summarized in Figure 19.13. Steven Petersen’s
group (1988) used a variety of conditions to identify speech regions.
In a word-generation study, they passively presented words (in some
cases, pseudo-words or pseudo-sounds) either visually or aurally to
a passive participant. In the next task, an output task, the
participant was to repeat the word. Finally, in an association task,
the participant was to suggest a use for the object named by the
target word (e.g., if “cake” were presented, the participant might say
“eat”).
Figure 19.13

Brain Areas Activated by Language Tasks Results obtained with the use of PET to monitor
blood flow were analyzed by using subtraction methods. (Part A: research from Posner &
Raichle, 1983; part B: research from Wagner et al., 2001; part C: research from Martin et al.,
1996; part D: research from Damasio et al., 1996.)
The investigators monitored blood flow using PET and analyzed
their data using a subtraction technique. In the sensory (reading or
listening) tasks, they identified changes from baseline blood flow by
taking the difference between the activities in the two states. In the
output task, they subtracted the sensory activity, and in the
association task, they subtracted the output activity. (Figure 7.16
explains the subtraction technique.)
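The hierarchical subtraction logic can be sketched with made-up activation arrays. The sketch below illustrates only the method, not the study’s actual analysis pipeline; the array shape and all numbers are invented.

```python
# Hierarchical subtraction: each contrast isolates the processing added
# by one task level. Voxel values here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64, 32)                            # hypothetical voxel grid

rest = rng.normal(100, 5, shape)                # baseline blood flow
sensory = rest + rng.normal(2, 1, shape)        # passive reading or listening
output = sensory + rng.normal(2, 1, shape)      # repeating the word aloud
association = output + rng.normal(2, 1, shape)  # generating a use for the word

sensory_map = sensory - rest            # regions responding to word input
output_map = output - sensory           # regions added by speaking
association_map = association - output  # regions added by semantic association
```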

The results, summarized in Figure 19.13A, illustrate the involvement of many brain regions in language and reveal some
specific contributions of each region. No overlap occurred in visual
and auditory activation during the passive task, implying that
processing word forms in the two modalities is completely
independent. During the speaking tasks, bilateral activation
occurred in the motor and sensory facial areas and the
supplementary speech area, and activation of the right cerebellum
occurred as well. Generating verbs activated the frontal lobe,
especially the left inferior region, including Broca’s area. The verb-
generation task also activated the posterior temporal cortex,
anterior cingulate cortex, and cerebellum.

Other investigators have identified still other activated areas, depending on task demands. Anthony Wagner and colleagues (2001)
presented participants with a single cue word and four target
words. The task was to indicate which target word was most closely
and globally related to the cue, thus measuring the participant’s
ability to retrieve meaningful information. An area in the left
premotor cortex just dorsal to Broca’s area became active during
this task (Figure 19.13B).

Alex Martin and his colleagues (1996) asked participants to name tools or animals and subtracted activation produced by the brain
response to animals from the response to tools. Naming tools
activates a region of premotor cortex also activated by imagined
hand movements (Figure 19.13C). Finally, Antonio Damasio and
colleagues (1996) reported that naming persons, animals, and tools
activates specific locations in area TE, the inferotemporal lobe
(Figure 19.13D).

In summary, the results of imaging studies using subtractive methods confirm that the classical anterior and posterior speech
zones identified by Broca and Wernicke are involved in language,
but they also implicate other regions. Imaging further suggests that
Wernicke’s area may deal largely with analyzing auditory input and
that Broca’s area does not simply represent speech movements but
also is involved in syntax and memory. Finally, the results provide
evidence that “language” is mapped onto circuits ordinarily engaged
in more primary functions: visual attributes of words are
represented in visual areas, auditory attributes are mapped onto
auditory brain regions, motor attributes are mapped onto motor
regions, and so on.
Neural Networks for Language
Hundreds of anatomical, brain-lesion, and brain-imaging studies
have demonstrated that language-related functions can be localized
to specific brain regions connected by neural pathways. It is less
clear whether these language-related regions are specialized for
language or serve other functions as well. In the 1800s, scientists
who mapped language using locations and connecting pathways
were criticized as “diagram makers.” This criticism did not deter
researchers, who reasoned that, even if a region has a primary role
in language, it is still appropriate to ask what localization means.
The word hammer, for example, can signify the object or the action
and can be a command or a question. Are all these meanings of
hammer localized in the same brain region, or are they located at
different places in the brain? How are the word and its many
meanings represented?

Many current language models are based on the idea that language
is widely distributed in cortical and other brain structures. Even
single words are widely distributed, which is one way they acquire
their many meanings. We describe two language-network models
that illustrate the distribution of this network in the cortex. Be
aware, however, that whereas computer networks are precise,
proposed language networks are speculative. First, it is difficult to
establish whether single neurons or groups of neurons are the
proper network elements; and second, information usually flows
one way in the linear programming models used for computer
networks, but network flow in the brain is two-way.

The Semantic Network


Based on a meta-analysis of 120 functional neuroimaging studies,
Binder and his colleagues (2009) proposed a semantic network, a
network engaged in processing the meaning of words and sentences.
The semantic network
includes seven regions of posterior and frontal cortex, diagrammed
in Figure 19.14: (1) the area around the angular gyrus (AG), a
supramodal region involved in the integration of complex
knowledge and its retrieval; (2) the lateral ventral temporal (LVT)
cortex, similarly involved in language comprehension; (3) the
ventral temporal (VT) cortex, involved in object categorization; (4)
the dorsal medial prefrontal (dmPF), involved in fluid semantic
word retrieval; (5) the inferior frontal gyrus (IFG), involved in
phonological working memory and syntactical processing; (6) the
ventromedial prefrontal (vmPF) cortex, involved in affective
processing such as the motivational and emotional significance of
words; and (7) the posterior cingulate gyrus (PCG), associated with
episodic and spatial word meaning. All of these regions are located
mainly in the left hemisphere. A subsequent version of the
semantic network includes additional visual, auditory,
somatosensory, and motor cortex regions as part of the semantic
network — regions likely involved in verbal actions associated with
speech. (This version, referred to as the brain dictionary, will be
discussed shortly.)

Figure 19.14

The Semantic Network Seven cortical regions proposed to form the semantic
cortical network include posterior regions (AG, angular gyrus; LVT, lateral ventral
temporal lobe; VT, ventral temporal lobe) that perform perceptual functions, and
anterior regions (dmPF, dorsomedial prefrontal; IFG, inferior frontal gyrus; vmPF,
ventromedial prefrontal cortex; PC, posterior cingulate cortex) that perform action
functions.
As suggested by this summary of the different contributions of the
anterior and posterior cortical regions of the semantic network,
posterior cortical regions are associated with perceptual knowledge of
language, including the interpretation of visual, auditory, and tactile
signals related to the construct of words and their meanings. The
anterior cortical regions are associated with action knowledge, such
as travel to various locations, the use of tools, and social
interactions.

The semantic network largely overlaps with two other brain
networks: the default network and the autobiographical memory
network. The relationship is not surprising. Language is likely a
prime vehicle for daydreaming, planning, and ruminating —
proposed functions for the default network. In addition, thinking
about past events, relating them to the present, and planning for
the future — functions of the autobiographical network — likely
require language. All of these functions might be related to theory
of mind, the ability to think of ourselves as conscious, thinking
individuals with a past, a present, and a future. These network
overlaps may suggest that the autobiographical and default
networks are the same network. Alternatively, the overlap may
occur because these networks use similar elements, though they
are not actually the same (Jackson et al., 2019).

The advantage of the semantic language network described here is
that it allows one to see at a glance the distribution of language
across the left hemisphere and to see that different parts of the
network serve different functions. By supposing that each module can
act alone or in cooperation with other modules, we can imagine
various degrees of language complexity, from distinguishing
phonemes to engaging in discourse. We can also make predictions,
for example, about what happens following damage to one or
another module or different combinations of modules.

The Brain Dictionary


Humans can name and respond to thousands of distinct object and
action categories, and investigators realize that it is unlikely that
each category is represented in its own brain region. A number of
recent studies have used a data-driven approach (the use of
computer analysis to identify relationships within large data sets) for
the analysis of fMRI activity associated with language. Researchers
ask participants to watch movies, listen to short narrative stories
(e.g., those from the Moth Radio Hour program), or read short
stories. The movies and stories are then segmented according to
every object or action that occurs. A computer correlates changes
in each voxel (a unit of volume in brain imaging, analogous to a
three-dimensional pixel) in relationship to every object or action
to determine cortical activity in the left and right hemispheres.
Each voxel is about 2 × 2 × 4 mm and measures a small region of
cortical blood flow; there are about 1000 voxels across the cortex.
Thus, the method is very detailed, but nevertheless a single voxel
comprises many hundreds of neurons.
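A highly simplified sketch of that correlation step follows, using simulated data; the time-series length, category labels, and use of a plain Pearson correlation are assumptions for illustration, not the actual voxelwise modeling pipeline.

import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_voxels = 300, 1000
categories = ["building", "road", "animal", "plant"]

# Indicator matrix: 1 when a labeled object or action is present on screen.
labels = rng.integers(0, 2, size=(n_timepoints, len(categories)))

# Simulated voxel responses; voxel 0 is made to track "building" scenes.
bold = rng.normal(0, 1, size=(n_timepoints, n_voxels))
bold[:, 0] += 0.8 * labels[:, 0]

def category_profile(voxel_ts, label_matrix):
    """Correlate one voxel's time course with each category indicator."""
    return np.array([np.corrcoef(voxel_ts, label_matrix[:, j])[0, 1]
                     for j in range(label_matrix.shape[1])])

profile = category_profile(bold[:, 0], labels)
for name, r in zip(categories, profile):
    print(f"{name:>8s}: r = {r:+.2f}")  # positive for buildings, near zero elsewhere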
The power of the data-driven method can be illustrated using an
example from Alexander Huth and coworkers (2012). They
identified 1705 objects or actions in movies watched by
participants, and they organized those objects and verbs into a
hierarchically arranged semantic tree using a program called
WordNet. This program groups similar words or actions together,
and the distances between clusters of words indicate the relative
strengths of the relationships. Huth and colleagues correlated the
response of each voxel with each object or action. Figure 19.15
illustrates the response of one voxel in the left-hemisphere
parahippocampal place area, a cortical region thought to be
responsive to scenes. This voxel shows increased responses to
scenes containing structures, roads, containers, devices, and
vehicles. It shows decreased responses to animals, plants, and body
parts. Thus, the voxel is responsive to scenes containing
manufactured objects but not to those containing living things or
parts of living things. The WordNet software displays the activity of
every voxel on a semantic cortical map using a color chart to
represent the entire cortical distribution of objects and actions.
Figure 19.15

The Response of a Single Voxel to Objects and Actions A continuous semantic space
describes the representation of thousands of object and action categories across the human
brain. The WordNet program arranges nouns (circles) and verbs (squares) in categories. Red
indicates positive responses, and blue indicates negative responses made by the voxel when
viewing movies. The area of each marker indicates response magnitude. The voxel mapped in
this figure is responsive to scenes containing constructed objects.
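WordNet itself is freely available; one convenient way to explore the kind of hierarchy and word-to-word distances described above is through NLTK’s WordNet interface, as in the sketch below (assuming NLTK is installed and its WordNet data has been downloaded; the example words are arbitrary and are not taken from the study).

import nltk
nltk.download("wordnet", quiet=True)  # fetch the WordNet data once
from nltk.corpus import wordnet as wn

# Each word maps to one or more synsets (sense nodes) in WordNet's hierarchy.
hammer = wn.synsets("hammer", pos=wn.NOUN)[0]
print(hammer.hypernym_paths()[0])  # chain of increasingly general categories

# Path similarity is one rough measure of distance between categories.
dog, cat, truck = (wn.synsets(w, pos=wn.NOUN)[0] for w in ("dog", "cat", "truck"))
print("dog-cat:  ", dog.path_similarity(cat))    # relatively high: both are animals
print("dog-truck:", dog.path_similarity(truck))  # lower: animal versus vehicle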

The procedure for semantic mapping from single-voxel analysis
described above has allowed researchers to create what is popularly
called the brain dictionary (Deniz et al., 2019). The brain dictionary
organizes and displays multiple semantic maps tiled across the
cortex, each map topographically representing related language
objects and actions. These semantic maps are analogous to the
retinotopic maps displayed in V1 and then represented in V2, V3,
and other visual areas across the posterior cortex (see Chapter 13).
Figure 19.16 shows that maps are well represented in five places in
the cortex and less well represented in two other cortical locations.
The maps are roughly located in the cortical regions that make up
the semantic network, and they may be the structural basis of the
semantic network.

Figure 19.16

Semantic Maps for Listening and Reading Semantic maps are displayed on the
cortical surface of one participant. The color wheel legend at the center
indicates the associated semantic concepts. Maps occur in the prefrontal cortex
(PFC), medial prefrontal cortex (MPC), lateral prefrontal cortex (LPF), lateral
temporal cortex (LTC), and ventral temporal cortex (VTC) but are less well-
defined in the auditory cortex (AC) and extrastriate cortex (ECT). (Reprinted by
permission of the Journal of Neuroscience, from Deniz, F., Nunez-Elizalde, A.O.,
Huth, A.G., Gallant, J.L. “The Representation of Semantic Information Across
Human Cerebral Cortex During Listening Versus Reading Is Invariant to Stimulus
Modality.” Journal of Neuroscience, 39:7722–7736, 2019, Figure 4. Permission
conveyed through Copyright Clearance Center, Inc.)

Just as each visual map serves a behavioral function, each semantic
map is similarly specialized. Whereas subtraction methods display
language functions mainly in the left hemisphere, the semantic
maps are almost equally represented in both hemispheres.
Although these maps are based on only a handful of participants,
the distribution and structure of maps are very similar across these
participants. The semantic maps suggest that objects and verbs are
continuously represented across the cortex. The more focal
representations of semantic information shown in Figure 19.15 are
likely representations of modal information that semantic maps
contain.

Nodes and Neural Webs for Language


The brain dictionary model suggests that the various meanings of a
word, such as hammer, are represented in different parts of the
cortex. For example, the image of a “hammer” may be represented
in structures of the visual cortex, the action of a “hammer” may be
located in the motor cortex, and so forth.
Riitta Salmelin and Jan Kujala (2006) suggest that meaning comes
through the connections (edges in network jargon; see Section 17.7)
between nodes proposed to comprise neural webs. Here again,
nodes can be single cells or collections of cells, and a web consists
of nodes and their two-way connections. The nodes and their
connections can be local or widely distributed across the cortex.
The idea is that by combining information from many parts of the
brain, individual words can take on many different meanings and
can represent language in its many forms (e.g., spoken or written).

Figure 19.17 illustrates some representative neural webs for
individual words. If a word contains visual content, the web
includes visual brain areas; if it contains motor content, the web
includes motor areas. Any given web includes nodes within primary
and secondary auditory areas as well as within primary and
secondary motor regions. The objective of creating neural webs to
represent language-related brain regions is not to eventually
produce a wiring diagram but rather to illustrate one way that the
brain might produce language. We can see from these examples
that language, even at the level of single words, is widely distributed
across the cortex.
Figure 19.17

Neural Webs for Language Tasks Nodes are symbolized by circles, and edges
(interconnecting axonal pathways) are represented by lines. In this model,
different word-related tasks use different neural webs. (Information from
Salmelin & Kujala, 2006.)
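As a toy illustration of the node-and-edge idea (not an anatomical claim), a word’s web can be represented as an undirected graph whose regions and connections are entirely hypothetical:

# Hypothetical neural web for the word "hammer": nodes are brain regions,
# edges are two-way connections. The regions and links are illustrative only.
web = {
    "auditory word form": {"motor speech", "visual object form"},
    "motor speech": {"auditory word form", "hand action"},
    "visual object form": {"auditory word form", "hand action"},
    "hand action": {"motor speech", "visual object form"},
}

def reachable(web, start):
    """Traverse the web from one node; every node reached can contribute meaning."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(web[node] - seen)
    return seen

print(sorted(reachable(web, "auditory word form")))

On this view, removing a node or an edge would degrade some meanings of the word while sparing others, which is consistent with the idea that meaning arises from the connections rather than from any single location.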
19.4 Language Disorders
Standard language function depends on the complex interaction of
sensory integration and symbolic association, motor skills, learned
syntactical patterns, and verbal memory. Aphasia may refer to a
language disorder apparent in speech, in writing (also called
agraphia), or in reading (also called alexia) produced by injury to
brain areas specialized for these functions. Thus, disturbances of
language due to severe intellectual impairment, to loss of sensory
input (especially vision and hearing), or to paralysis or
incoordination of the musculature of the mouth (called anarthria)
or hand (for writing) are not considered aphasic disturbances.
These disorders may accompany aphasia, however, and they
complicate its study.

Language disturbances have been divided into 10 basic types,
grouped by disorders of comprehension and disorders of
production in Table 19.2 (Goodglass & Kaplan, 1972). Most of these
disorders are described in the Part II chapters on parietal-,
temporal-, and frontal-lobe functions (see Chapters 14–16). The one
exception is paraphasia, the production of unintended syllables,
words, or phrases during speech. Paraphasia differs from
difficulties in articulation in that sounds are correctly articulated,
but they are the wrong sounds: people with paraphasia either
distort the intended word (e.g., pike instead of pipe) or produce a
completely unintended word (e.g., my mother instead of my wife).
Table 19.2 Summary of Symptoms of Language Disorders

Disorders of comprehension

Poor auditory comprehension

Poor visual comprehension

Disorders of production

Poor articulation

Word-finding deficit (anomia)

Unintended words or phrases (paraphasia)

Loss of grammar and syntax

Inability to repeat aurally presented material

Low verbal fluency

Inability to write (agraphia)

Loss of tone in voice (aprosodia)

Information from Goodglass & Kaplan, 1972.

Despite disagreement among experts concerning the number of
types of aphasias, certain classification systems are widely used,
such as the system originally described by Mazzocchi and Vignolo
(1979). Generally, aphasias can be divided into three broadly defined
categories: fluent aphasias, nonfluent aphasias, and pure aphasias.
Note that aphasias in each of these categories can include elements
related to both disorders of comprehension and disorders of
production.

Fluent Aphasias
Fluent aphasias are impairments related mostly to language input
or reception. Individuals with these impairments produce fluent
speech but have difficulties either in auditory verbal
comprehension or in repeating words, phrases, or sentences
spoken by others. To a listener who does not speak the language of
a fluent aphasic, it seems as though the patient is speaking easily
and correctly.

Wernicke aphasia, or sensory aphasia, is the inability to
comprehend words or to arrange sounds into coherent speech even
though word production remains intact. Alexander Luria proposed
that sensory aphasia has three characteristic deficits: in classifying
sounds, producing speech, and writing (Luria & Hutton, 1977).

Hearing and making sense of speech sounds demands the ability to
qualify sounds — that is, to recognize the different sounds in the
system of phonemes that are the basic speech units in a given
language. For example, in the Japanese language, the sounds l and r
are not distinguished; a Japanese-speaking person hearing English
cannot distinguish these sounds because the necessary template
was never laid down in the brain. Thus, although this distinction is
perfectly clear to English-speaking persons, it is not clear to native
Japanese speakers. Similarly, a person with Wernicke aphasia has,
in his or her own language, the inability to isolate the significant
phonemic characteristics and to classify sounds into known
phonemic systems. Thus, we see in Wernicke aphasia a deficit in
sound categorization.

The second characteristic of Wernicke aphasia is a speech defect.
The affected person can speak — and may speak a great deal — but
confuses phonetic characteristics, producing what is often called
word salad, intelligible words that appear to be strung together
randomly. The third characteristic is a writing impairment. A
person who cannot discern phonemic characteristics cannot write
because he or she does not know the graphemes (pictorial or
written representations of a phoneme) that combine to form a
word.

Transcortical aphasia, sometimes called isolation syndrome, is
curious in that people can repeat and understand words and name
objects but cannot speak spontaneously, or they can repeat words
but cannot comprehend them. Comprehension might be poor
because words fail to arouse associations. Production of
meaningful speech might be poor because, even though word
production is normal, words are not associated with other
cognitive activities in the brain.
Conduction aphasia is paradoxical: people with this disorder can
speak easily, name objects, and understand speech, but they cannot
repeat words. The simplest explanation for this problem is a
disconnection between the perceptual word image and the motor
systems producing the words.

People with anomic aphasia (sometimes called amnesic aphasia)
comprehend speech, produce meaningful speech, and can repeat
speech, but they have great difficulty finding the names of objects.
For example, we saw a patient who, when shown a picture of a ship
anchor, simply could not think of the name and finally said, “I know
what it does…. You use it to anchor a ship.” Although he had
actually used the word as a verb, he was unable to access it as a
noun. Difficulties in finding nouns appear to result from damage
throughout the temporal cortex. In contrast, verb-finding deficits
are more likely to come from left frontal injuries.

Although the extent to which the brain differentiates between
nouns and verbs may seem surprising, we can see that they have
very different functions, as suggested by the semantic maps
described in Section 19.3. Nouns are categorizers. Verbs are action
words that form the core of syntactical structure. It makes sense,
therefore, to find that they are separated in such a way that nouns
are a property of brain areas controlling recognition and
classification, and verbs are a property of brain areas controlling
movement.
Nonfluent Aphasias
Nonfluent aphasias are difficulties in articulating speech,
accompanied by relatively good auditory verbal comprehension. In
nonfluent aphasia (also called Broca aphasia, or expressive
aphasia), a person continues to understand speech but has to labor
to produce it: the person speaks in short phrases interspersed with
pauses, makes sound errors, makes repetitious errors in grammar,
and frequently omits function words. Only the key words necessary
for communication are used. Nevertheless, the deficit is not one of
making sounds but rather of switching from one sound to another.

Nonfluent aphasia can be mild or severe. In one form, transcortical
motor aphasia, repetition is good, but spontaneous production of
speech is labored. In global aphasias, speech is labored and
comprehension is poor.

Pure Aphasias
The pure (or selective) aphasias include alexia, an inability to read;
agraphia, an inability to write; and word deafness, in which a person
cannot hear or repeat words. These disorders may be restricted to a
specific ability. For example, a person may be able to read but not
write or may be able to write but not read.
19.5 Localization of Lesions in
Aphasia
Students who are just beginning to study the neural bases of
language are intrigued by the simplicity of the Wernicke–
Geschwind model, where Wernicke’s area is associated with speech
comprehension, Broca’s area is associated with speech production,
and the fibers connecting them translate meaning into sound (see
Figure 19.6). As earlier sections explain, however, the neural
organization of language is more complex and requires
consideration of the brain’s many pathways and anatomical regions
related to language.

Research now shows that selective damage to Broca’s area and
Wernicke’s area does not produce Broca aphasia or Wernicke aphasia.
For example, after a stroke limited to Broca’s area, a person may
have only a slight impairment in articulating words. Damage
outside Broca’s area produces Broca aphasia, but generally the
condition is produced only by quite large lesions (Figure 19.18). In
addition, people who display the articulation deficits associated
with Broca aphasia typically display cognitive and memory deficits
as well.
Figure 19.18

Stroke Outside Broca’s Area Produces Broca Aphasia Overlapping lesions of 36
people with Broca aphasia lasting more than 1 year. The 100% overlap area (red)
covers an area outside Broca’s area (square), and lesions are generally quite large.
(Information from Dronkers et al., 2017.)

Several factors complicate the study of the neural basis of language.
As we have seen from semantic maps of language, most of the brain
takes part in language in one way or another. It makes sense that a
behavior as comprehensive and complex as language would not be
the product of some small, circumscribed region of the brain.
Furthermore, most of the patients who contribute information to
studies of language have had strokes, usually of the middle cerebral
artery (MCA). Figure 19.19A illustrates the location of this artery
and its tributaries. Because stroke results from arterial blockage or
bleeding, all core language areas may be damaged or only smaller
regions may be damaged, depending on where stroke occurs.
Individual differences in the tributary pattern of the MCA add to the
variation seen in stroke symptoms and outcomes. The artery
supplies subcortical areas as well, including the basal ganglia, a
region that includes the caudate nucleus and is important in
language (Figure 19.19B). Immediately following stroke, symptoms
are generally severe, but they improve considerably as time passes.
Thus, the symptoms cannot be easily ascribed to damage in a
particular brain region. Finally, aphasia syndromes described as
nonfluent (Broca) or fluent (Wernicke) consist of numerous varied
symptoms, each of which may have a different neural basis.
Figure 19.19

Middle Cerebral Artery The amount of damage to the cortex due to blockage or
bleeding of the middle cerebral artery (red) can vary widely in the neocortex (A) and
the basal ganglia (B), depending on the location of the blockage or bleeding.

Cortical Language Components


In studying a series of stroke patients with language disorders, Nina
Dronkers and her coworkers (1999) correlated different symptoms of
nonfluent and fluent aphasia with specific cortical regions. Their
analysis suggests that nonfluent aphasia consists of at least five
kinds of symptoms: apraxia of speech (difficulty in producing
sequences of speech sounds), impairment in sentence
comprehension, recurring utterances, impairment in articulation of
sounds, and impairment in working memory for sentences.

Their analysis suggests that the core deficit, apraxia of speech,
comes from damage to the insula. Impairments in sentence
comprehension seem to be associated with damage to the dorsal
bank of the superior temporal gyrus and the middle temporal gyrus;
recurring utterances seem to stem from damage to the arcuate
fasciculus; and impairments in working memory and articulation
seem to be associated with damage to ventral frontal cortex.

Concerning fluent aphasia, Dronkers and her colleagues propose
that most of the core difficulties, especially the lack of speech
comprehension, come from damage to the medial temporal lobe
and underlying white matter. Damage in this area not only destroys
local language regions but also cuts off most of the occipital,
temporal, and parietal regions from the core language region. The
researchers also propose that damage to the temporal cortex
contributes to deficits in holding sentences in memory until they
can be repeated. Thus, these patients appear to have impairment in
the “iconic” memory for sounds but are not impaired in
comprehension.

Subcortical Language Components


At the same time that Broca was describing a cortical center for
speech control, Hughlings-Jackson (1932) proposed that subcortical
structures are critical to language. In 1866 he wrote: “I think it will
be found that the nearer the disease is to the basal ganglia, the
more likely is the defect of articulation to be the striking thing, and
the farther off, the more likely it is to be one of mistakes of words.”
Yet when Alison Rowan and her colleagues (2007) used MRI and
behavioral tests specifically to examine the language abilities of
young patients who had had a basal ganglia stroke, they concluded
that the language deficits most likely derive from damage to the
neocortex.

Other evidence indicates that the thalamus participates in language.
Findings by George Ojemann (2003), obtained by electrically
stimulating the thalamus, indicate that the pulvinar nucleus and the
lateral-posterior–lateral-central complex of the left thalamus have a
role in language that is not common to other subcortical structures.
Stimulation of the left-ventrolateral and pulvinar thalamic nuclei
produced speech arrest, difficulties in naming, perseveration, and
reduced talking speed. Stimulation of the thalamus has also been
reported to have a positive effect on memory because it improves
later retrieval of words heard during the stimulation. As a result,
some researchers propose that the thalamus influences language
function by activating or arousing the cortex.

When the thalamus is damaged by electrical current applied for the
treatment of abnormal movements, a variety of speech and language
disturbances have been found in association with lesions of the
left-ventrolateral thalamus or the pulvinar nucleus or both.
Symptoms include postoperative dysphasia, which features
transitory symptoms of increased verbal-response latency,
decreases in voice volume, alterations in speaking rate and slurring
or hesitation in speech, and impaired performance on tests of
verbal IQ and memory.

Right-Hemisphere Contributions to
Language
Although it is well established that the left hemisphere of right-
handed people is dominant in language, the right hemisphere also
has language abilities. The best evidence comes from studies of
split-brain patients in whom the linguistic abilities of the right
hemisphere have been studied systematically with the use of
various techniques for lateralizing input to one hemisphere (such
as shown in Figure 11.9).
The results of these studies show that the right hemisphere has
little or no speech but surprisingly good auditory comprehension of
language, including words for both objects and actions. There is some
reading ability but little writing ability in the right hemisphere. In
addition, although the right hemisphere can recognize words
(semantic processing), it has little understanding of grammatical
rules and sentence structures (syntactical processing).

Complementary evidence of the right hemisphere’s role in language
comes from studies of people who have had left hemispherectomies.
If the left hemisphere is lost early in development, the right
hemisphere can acquire considerable language abilities (as detailed
in the Chapter 10 Portrait), although people with left
hemispherectomies are by no means typical. Left hemispherectomy
in adulthood produces severe deficits in speech but leaves
surprisingly good auditory comprehension. Reading ability is
limited, and writing is usually absent. In general, left
hemispherectomy appears to result in language abilities
reminiscent of those achieved by the right hemisphere of
commissurotomy patients.

The effects of right-hemisphere lesions on language functions
provide further indication that the right hemisphere is capable of
language comprehension, especially of auditory material, even
though it usually does not control speech. For example, aphasia is
rare after right-hemisphere lesions, even after right
hemispherectomy (some left-handers excepted), but more subtle
linguistic impairments bubble up, including changes in vocabulary
selection, in responses to complex statements with unusual
syntactical construction, and in comprehending metaphors. In
addition, right orbitofrontal lesions reduce verbal fluency and lead
to deficits in prosody — both for comprehending tone of voice and
for producing emotional vocal tone.

The contrasts in right- and left-hemisphere functioning in language
have been summarized as follows. The partner of a patient with
Broca aphasia comments that the patient understands everything
said, even though the patient is unable to match spoken words with
their pictured representations and cannot follow two-step
commands. The partner of a patient with an equivalent right-
hemisphere lesion comments that the patient has difficulty
following a conversation, makes irrelevant remarks, and generally
seems to miss the point of what people are saying, even though this
patient performs quite well on the same tests failed by the patient
with a left-hemisphere lesion.

Thus, the right hemisphere has considerable language
comprehension, whereas the left hemisphere’s major contribution
to language is syntax (Table 19.3). Syntax has many components,
including producing, timing, and sequencing movements required
for speaking as well as understanding the rules of grammar.
Table 19.3 Language Activities of the Two Hemispheres

Function: left-hemisphere / right-hemisphere contributions

Gestural language: left +, right +

Prosodic language
  Rhythm: left ++
  Inflection: left +, right +
  Timbre: left +, right ++
  Melody: right ++

Semantic language
  Word recognition: left +, right +
  Verbal meaning: left ++, right +
  Concepts: left +, right +
  Visual meaning: left +, right ++

Syntactical language
  Sequencing: left ++
  Relations: left ++
  Grammar: left ++

Reprinted from Cortex, Vol. 22, Benson, D. F., Aphasia and lateralization of language, pages
71–86, © 1986, with permission from Elsevier.
19.6 Neuropsychological
Assessment of Aphasia
The neuropsychological assessment of aphasia has changed greatly
with the advent of brain imaging. Comprehensive test batteries
were once used to localize brain lesions and to establish a
standardized, systematic procedure for assessing aphasia, both to
provide clinical descriptions of patients and to facilitate
comparison of patient populations. With the advent of brain
imaging, a brain lesion can be quickly and accurately localized, but
there remains a need for tests that can be administered quickly (in
under 30 minutes) while still providing detailed information. An
additional problem is that although there is no shortage of tests, the
adequacy of all current tests has been called into question (Rohde
et al., 2018). Interestingly, simply measuring the timing of pauses in
speech has been suggested as a test of language impairment
(Angelopoulou, 2018). Finally, therapeutic approaches are now
more directed toward individual symptoms, such as those listed in
Table 19.2, than to assessment of overall conditions.
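As a sketch of how such a pause measure might be computed, the snippet below works from word onset and offset timestamps; the timings and the 250-ms cutoff are invented for illustration and are not taken from the cited work.

# Hypothetical word timings (onset, offset) in seconds from a transcribed speech sample.
word_times = [(0.0, 0.4), (0.5, 0.9), (2.4, 2.8), (2.9, 3.3), (5.0, 5.5)]

# A "pause" here is any silent gap between words longer than 250 ms (arbitrary cutoff).
pauses = [next_on - off
          for (_, off), (next_on, _) in zip(word_times, word_times[1:])
          if next_on - off > 0.25]

print(f"pauses: {len(pauses)}, mean duration: {sum(pauses) / len(pauses):.2f} s")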

Table 19.4 summarizes a few of the numerous tools for assessing
aphasia (Lezak et al., 2012), with basic references. The first group,
aphasia test batteries, contains varied subtests that systematically
explore the subject’s language capabilities. They typically include
tests of (1) auditory and visual comprehension; (2) oral and written
expression, including tests of repetition, reading, naming, and
fluency; and (3) conversational speech.

Table 19.4 Summary of Major Neuropsychological Tests of Aphasia

Test (basic reference)

Aphasia test batteries

Boston Diagnostic Aphasia Test (Goodglass & Kaplan, 1972)
Functional Communication Profile (Sarno, 1969)
Neurosensory Center Comprehensive Examination for Aphasia (Spreen & Benton, 1969)
Porch Index of Communicative Ability (Porch, 1967)
Minnesota Test for Differential Diagnosis of Aphasia (Schuell, 1965)
Wepman–Jones Language Modalities Test for Aphasia (Wepman & Jones, 1961)

Aphasia screening tests

Conversation analysis (Beeke et al., 2007)
Halstead–Wepman Aphasia Screening Test (Halstead & Wepman, 1959)
Token Test (De Renzi & Vignolo, 1962)

Because test batteries have the disadvantages of being lengthy and
requiring special training for those who administer them, some
brief aphasia screening tests also have been devised, including
conversational analysis and some simpler formal tests listed under
the second group in Table 19.4. The Halstead–Wepman Aphasia
Screening Test and the Token Test are often used as part of standard
neuropsychological test batteries because they are short and easy to
administer and score. Screening tests do not take the place of the
detailed aphasia test batteries, but they offer efficient means of
discovering the presence of a language disorder. If a detailed
description of the linguistic deficit is then desired, the more
comprehensive aphasia batteries may be given. Although
theoretical models and test batteries may be useful for evaluating
and classifying the status of a patient with aphasia, they are no
substitute for continued experimental analysis of language
disorders. It is likely, however, that these efforts will be closely tied
to brain analytical methods that use voxel-based fMRI and to data
analysis of results from a large number of subjects (Liew et al.,
2018).

Whereas the test batteries attempt to classify patients into a
number of groups, psychobiological approaches concentrate on
individual differences and peculiarities and, from these differences,
attempt to reconstruct the brain’s language-producing processes.
On the practical side, John Marshall (1986) notes that only about
60% of aphasic patients will fit into any contemporary classification
scheme. Similar inadequacies have been noted in other
classification methods.
Acquired Reading Disorders
Acquired reading disorders may follow brain injury such as stroke.
Impairments in reading are usually grouped together under the
term dyslexia (for a reading impairment). Assessment of reading
disorders, detailed in Section 24.6, is a specialized branch of
language research for several reasons. First, analyzing reading is
more objective than analyzing writing and speaking. Second, a large
pedagogical science of reading is available. Finally, these disorders
are common and require diagnosis and remediation.

Model building is one objective approach to studying reading. A
model is much like an algorithm — a set of steps to follow to answer
a question. Reading models are used to test people with reading
disabilities, both as a way of defining the impairment and as a way
of testing the model’s utility.

The model-building approach views reading as being composed of a
number of independent skills or subsystems, one or another of
which may not be functioning in an impaired reader. The model-
building approach thus differs from classical neurological
approaches in two ways: the latter (1) define dyslexia according to
whether it arises in conjunction with other disorders, such as
dysgraphia or dysphasia, and (2) aim primarily to correlate the
impairment with the locus of brain damage.
Analyzing Acquired Dyslexia
The model-building approach can be traced to an analysis by James
Hinshelwood (1917), in which he identified different types of
reading disorders: (1) the inability to name letters (letter blindness),
(2) the inability to read words (word blindness), and (3) the inability
to read sentences (sentence blindness). Hinshelwood’s taxonomy
and its subsequent elaboration led to the current hypothesis that
reading is composed of a number of independent abilities that may
each have an independent anatomical basis.

Figure 19.20 charts a series of questions that an examiner might
ask in order to identify the following impairments:

1. With attentional dyslexia, when one letter is present, letter
naming is standard, and when more than one letter is present,
letter naming is difficult. Even if a tester points to a letter that
is specially colored, underlined, and has an arrow pointing to
it, the individual being tested may name it incorrectly when it
is among other letters. The same phenomenon may occur for
words when more than one word is present.
2. Persons displaying neglect dyslexia may misread the first half
of a word (e.g., reading whether as smother) or may misread the
last part of a word (e.g., reading strong as stroke).
3. With letter-by-letter reading, affected persons read words only by
spelling them out to themselves (aloud or silently). Silent
spelling can be detected by the additional time required for
reading long words. Frequently, an affected person can write
but then has difficulty reading what was written.
4. The key symptoms of deep dyslexia are semantic errors; that
is, persons with deep dyslexia read semantically related words
in place of the word that they are trying to read (e.g., tulip as
crocus and merry as Christmas). Nouns are easiest for them to
read, followed by adjectives and then verbs; function words
present the greatest difficulty. Those with deep dyslexia
find it easier to read concrete words than abstract ones and are
completely unable to read nonsense words. They are also
generally impaired at writing and in short-term verbal memory
(digit span).
5. The one symptom of phonological dyslexia is inability to read
nonwords aloud; otherwise, reading may be nearly flawless.
6. Individuals with surface dyslexia cannot recognize words
directly but can understand them by using letter-to-sound
relations if they sound out the words. This reading procedure
works well as long as the words are regular and can be sounded
out (e.g., home, dome) but not if the words are irregular (e.g.,
come will be read as comb). Regular words have consistent
phoneme–grapheme relationships, whereas irregular words do
not and must be memorized. Spelling is also impaired but is
phonetically correct. Surface dyslexia does not develop in
languages that are totally phonetic and sounded out as they are
written (e.g., Italian). Surface dyslexia is a common symptom of
children who have difficulty learning to read.
Figure 19.20

Analyzing Acquired Dyslexia. (Information from Coltheart, 2005.)
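The decision sequence charted in Figure 19.20 can be caricatured in code; the question names and their order below are a loose paraphrase of the six patterns just described, not a clinical instrument.

def classify_acquired_dyslexia(obs):
    """Toy decision sequence over observed reading symptoms (boolean flags)."""
    if obs.get("single_letters_ok_but_fails_with_multiple"):
        return "attentional dyslexia"
    if obs.get("misreads_one_half_of_words"):
        return "neglect dyslexia"
    if obs.get("reads_only_by_spelling_words_out"):
        return "letter-by-letter reading"
    if obs.get("semantic_errors") and not obs.get("reads_nonwords"):
        return "deep dyslexia"
    if not obs.get("reads_nonwords"):
        return "phonological dyslexia"
    if obs.get("reads_regular_but_not_irregular_words"):
        return "surface dyslexia"
    return "no acquired dyslexia pattern identified"

print(classify_acquired_dyslexia(
    {"semantic_errors": True, "reads_nonwords": False}))  # -> deep dyslexia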

The Dual-Route Theory


Central to the model-building idea of reading is the dual-route
theory, which proposes that reading written language is
accomplished by use of two distinct but interactive procedures, the
lexical and nonlexical routes. Reading by the lexical route relies on
the activation of orthographic (visual word form) or phonological (sound)
representations of a whole word. The lexical route can process all
familiar words, both regular and irregular, but it fails with
unfamiliar words or nonwords because it lacks a means for
representing them.

In contrast with the whole-word retrieval procedure used by the
lexical route, the nonlexical route uses a subword procedure based
on sound-spelling rules. The nonlexical route can succeed with
nonwords (e.g., klant) and regular words that obey letter–sound
rules, but it cannot succeed with irregular words that do not obey
these rules (e.g., winding, choir).

According to the dual-route theory, typical readers compute sense
and sound in parallel, whereas, in a dyslexic reader, one process or
the other may be absent. In deep dyslexia, a patient is unable to
process for sound and reads for sense. The patient may misread the
word bird as butterfly, both words referring to flying animals. In
surface dyslexia, a patient is able to process for sound but not for
sense. The patient might pronounce English words correctly and
may even read fluently but still not realize what he or she is saying.
Stephen Rapcsak and his colleagues (2007) propose that the dual-
route theory is effective in diagnosing both developmental and
acquired dyslexia.
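A minimal sketch of the two routes follows, assuming a tiny hypothetical lexicon and a deliberately crude set of letter-to-sound rules; real grapheme–phoneme conversion is far more elaborate.

# Lexical route: whole-word lookup (handles familiar regular and irregular words).
lexicon = {"home": "hohm", "come": "kuhm", "choir": "kwai-er"}

# Nonlexical route: crude letter-to-sound rules (handles regular words and nonwords).
rules = {"h": "h", "o": "oh", "m": "m", "e": "", "c": "k", "k": "k",
         "l": "l", "a": "a", "n": "n", "t": "t"}

def read_lexical(word):
    return lexicon.get(word)  # fails (None) for unfamiliar words and nonwords

def read_nonlexical(word):
    return "".join(rules.get(ch, "?") for ch in word)  # fails for irregular spellings

for word in ("home", "come", "klant"):
    print(word, "| lexical:", read_lexical(word),
          "| nonlexical:", read_nonlexical(word))

# The irregular word "come" is read correctly only by the lexical route; the
# nonlexical route regularizes it to "kohm," rhyming with "home." The nonword
# "klant" is read only by the nonlexical route, since it has no lexical entry.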

Figure 19.21 charts a model illustrating the dual-route theory. Note
the quite separate ways of obtaining speech from print and a still
different way of producing letter names. The important features of
the dual-route approach are that it does not depend on function–
anatomy relationships, it can be applied to language disorders other
than dyslexia, and it can lead to hypotheses concerning the
anatomical organization of language.
Figure 19.21

Dual-Route Model Speech from print can follow a number of routes and can be
independent of comprehension or pronunciation. (Information from Coltheart, 2005.)
SUMMARY
19.1 Language consists of methods of social
information sharing
Language allows humans to organize sensory inputs by assigning
tags to information. Tagging allows us to categorize objects and
ultimately concepts and to speak to ourselves about our past and
future. Language also includes the unique motor act of producing
syllables as well as the ability to impose grammatical rules. Both
dramatically increase the functional capacity of language.

19.2 Language may be a hominin trait


Continuity theorists propose that language has a long evolutionary
history; discontinuity theorists propose that language evolved
suddenly in modern humans. The evolution of language represents
not the development of a single ability but rather the parallel
development of multimodal processes. Investigations of language
origins are directed toward understanding the component skills
necessary for language, the genes that contribute to languagelike
processes, and the expression and evolution of language in different
animal species. Evolutionary investigations suggest that language
origins preceded modern humans.

19.3 Language is a function of the entire brain


Language functions occupy a vast cortical area. Functions such as
generating verbs versus nouns or understanding visual versus
auditory information are found in precise locations. Language, like
other cerebral functions, is organized in a series of parallel
hierarchical channels that can be modeled as neural networks. The
semantic network consists of a number of semantic maps that
generate phonemes and syntax across cortical regions, even at the
level of individual words.

19.4 Language disorders reflect cortical semantic areas
Traditional classifications of language disorders characterize fluent
aphasias, in which speech remains fluent, as a symptom of posterior
cortical damage; nonfluent aphasias, in which speaking is labored, as
a symptom of anterior cortical damage; and pure aphasias, which may
be highly selective.
Various combinations of fluent and nonfluent types are identified,
depending on the disorder and the location and extent of brain
injury.

19.5 Dual neural systems underlie the use of language
One contemporary language model proposes paired dorsal and
paired ventral language pathways connecting temporal- and frontal-
lobe language areas. The dorsal pathways mediate phonology, and
the ventral pathways mediate semantics. Both pairs of pathways are
involved in short- and long-term memory for language. Subcortical
structures and the right hemisphere also contribute to language,
revealing its wide distribution through the brain.

19.6 Neuropsychological assessments describe language disorders
Assessment tools developed to describe language disorders include
tests of perceptual disorders, disorders of comprehension, and
disorders of speech production. The complexities of language
render it difficult to group every disorder with any single
assessment tool.

Reading analysis lends itself to a model-building approach. Dual-
route theory proposes that reading can be accomplished in two
ways: by either (1) a lexical route in which words are recognized as
wholes or (2) a nonlexical approach in which words are recognized
by using letter–sound rules. Acquired or developmental dyslexia
can include impairments in lexical routes, nonlexical routes, or
both.

Key Terms
agraphia
alexia
