Language
PORTRAIT
Multilingual Meltdown
K.H., a Swiss-born architect, was a professor of architecture at a major U.S. university.
Although German was his first language and he was fluent in French and Italian, his primary
language had become English.
He had been an outstanding student, excelling at writing and meticulous about his spelling
and grammar. When his mother complained that he was making spelling and grammatical
errors in his letters to her, written in German, he was astonished. He suspected that he was
forgetting his German and resolved to prevent that from happening.
A few weeks later, K.H. asked a colleague to review a manuscript, in English, that he had just
completed. His colleague commented that K.H. must be working too hard because the
manuscript was filled with uncharacteristic errors. At about the same time, K.H. noticed that
the right side of his face felt “funny.” A neurologist found a small tumor at the junction of the
motor–face area and Broca’s area in the left hemisphere.
The tumor, which was benign, was removed surgically. In the first few days after the surgery,
K.H. was densely aphasic: he could neither talk nor understand oral or written language.
Although he had been warned that aphasia was likely and that it would be temporary, he was
visibly upset. By the end of the first week, he could understand oral language, but his speech
was still unintelligible, and he could not read.
By the end of the second week, K.H. was speaking German fluently but had difficulty with
English, although it was certainly understandable to him. He was still unable to read in any
language but believed that he could read German and could be convinced otherwise only
when he was informed that the book he was supposedly reading was upside down! His
reading and English slowly improved, but even now, years later, K.H. finds spelling in any
language difficult, and his reading is slower than would be expected for a person of his
intelligence and education.
Language Structure
Like most other people, you probably think of words as the
meaningful units of language. Linguists break down language
differently, as summarized in Table 19.1. They view words as
consisting of fundamental language sounds, called phonemes, that
form words or parts of words. Phonological analysis determines
how we link phonemes together.
Lexicon: Collection of all words in a given language; each lexical entry includes all information with morphological or syntactical ramifications but does not include conceptual knowledge
Semantics: Meanings that correspond to all lexical items and all possible sentences
Prosody: Vocal intonation — the tone of voice — which can modify the literal meaning of words and sentences
Producing Sound
The basic anatomy that enables humans to produce sound consists
of two sets of parts, one set acting as the sound source and the
other as filters, as modeled in Figure 19.1A and charted in Figure
19.1B. Air exhaled from the lungs drives oscillations of the vocal
cords (or vocal folds), folds of mucous membrane attached to the
vocal muscles, located in the larynx, or “voice box,” the organ of
voice. The rate of vocal-fold oscillation (from about 100 Hz in adult
men, 150 to 250 Hz in women, and up to 500 Hz in small children)
determines the pitch (low to high frequency) of the sound
produced.
Figure 19.1
Vocal Production (A) Modeling how the vocal tract filters speech sound energy from the vocal
cords to produce formants. (B) Flowchart for speech production: the larynx is the source of
sound energy, and the vocal tract filters (shapes) the energy to produce the final sound
output, speech. (C) Cross-sectional views comparing larynx and vocal-tract position in a
chimpanzee and a human. (Information from Fitch, 2000.)
Acoustical energy thus generated then passes through the vocal
tract (pharyngeal, oral, and nasal cavities) and finally out through
the nostrils and lips. As this energy passes through the vocal tract,
the structures in the vocal tract filter (or shape) sound waves
specific to each sound, called formants. Formants modify the
emitted sound, allowing specific frequencies to pass unhindered
but blocking transmission of others. (Review the spectrogram
representation in Figure 15.13.) Filtering plays a crucial role in
speech: the length and shape of the vocal tract determine formant
characteristics, which are modified rapidly during speech by the
movements of the articulators (tongue, lips, soft palate, and so on).
Formants emphasize sound frequencies that are meaningful in
speech.
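To make the source-filter idea concrete, here is a minimal Python sketch (not from the text): a pulse train at a chosen fundamental frequency stands in for the vocal-fold source, and a cascade of two-pole resonators stands in for the vocal-tract filter. The sampling rate, fundamental frequency, and formant values (rough figures for the vowel /a/) are illustrative assumptions.

# Minimal source-filter sketch: a glottal-like pulse train shaped by formant
# resonators. Parameter values are illustrative, not taken from the chapter.
import numpy as np
from scipy.signal import lfilter

fs = 16000                      # sampling rate (Hz)
f0 = 120                        # fundamental frequency, roughly an adult male voice
dur = 0.5                       # duration in seconds

# Source: impulse train approximating glottal pulses at f0
n = int(fs * dur)
source = np.zeros(n)
source[::int(fs / f0)] = 1.0

def formant_resonator(x, freq, bw, fs):
    """Two-pole filter that emphasizes energy near one formant frequency."""
    r = np.exp(-np.pi * bw / fs)            # pole radius set by formant bandwidth
    theta = 2 * np.pi * freq / fs           # pole angle set by formant frequency
    b = [1 - 2 * r * np.cos(theta) + r ** 2]
    a = [1, -2 * r * np.cos(theta), r ** 2]
    return lfilter(b, a, x)

# Filter: pass the source through resonators at approximate /a/ formants
speech = source
for freq, bw in [(730, 90), (1090, 110), (2440, 170)]:
    speech = formant_resonator(speech, freq, bw, fs)
speech /= np.abs(speech).max()              # normalize for plotting or playback

Changing f0 shifts the pitch without moving the formants, whereas changing the resonator frequencies changes which vowel the output resembles, mirroring the separate roles of source and filter described above.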
Categorization
Multiple parallel hierarchical neural networks function to process
incoming sensory stimulation. As the cortex expands through our
early development and the number of networks that process
parallel sensory information increases, binding (integrating) the
information into a single perception of reality becomes more
difficult. The brain must determine which of myriad kinds of
sensory information reaching the cortex correspond to a given
object in the external world. Thus, it becomes necessary to
categorize information — for example, to tag some qualities as
belonging to plants and others as belonging to animals.
Assigning tags to information makes it easier to perceive the
information and to retrieve it later when needed, as we do, for
example, with cladograms used to classify plants and animals (see
Figure 2.1). The ventral visual stream coursing through the
temporal lobes participates in object categorization, and the dorsal
stream may also participate by making relatively automatic
distinctions between objects, such as plant versus animal or human
versus nonhuman. (In Section 19.3, we describe how the brain
categorizes language into semantic maps.)
Labeling Categories
Words are the ultimate categorizers, and the use of words as tags to
label categories rests on a preexisting perception of what categories
are. The development of human language may have entailed
selection for novel means of categorization that not only allowed for
combining and grouping simple sensory stimuli but also provided a
means of organizing events and relationships.
Sequencing Behavior
Human language employs transitional larynx movements to form
syllables. Left-hemisphere structures associated with language
form part of a system that has a fundamental role in ordering vocal
movements, such as those used in speech. We can also sequence
face, body, and arm movements to produce nonverbal language.
Sequencing words to represent meaningful actions likely makes use
of the same dorsal-stream frontal cortex circuits that sequence
motor action more generally.
Mimicry
Mimicry fosters language development. Athena Vouloumanos and
Janet Werker (2007) have found that from birth, babies show a
preference for listening to speech over other sounds. When they
begin to babble, they are capable of making the sounds used in all
languages. They also mimic and subsequently prefer the language
sounds made by the people in their lives. Over the course of their
early development, infants also lose the ability to distinguish
sounds that they no longer use. According to some estimates, in the
formative years, by mimicking the speech of others, children may
add as many as 60 new words each day to their vocabularies. Our
mirror system neurons respond when we see others make
movements and also when we make the same movements (see
Section 9.1). One explanation of mimicry is that mirror neurons in
the cortical language regions are responsible for our ability to
mimic the sounds, words, and actions that comprise language.
19.2 Searching for the Origins of
Language
Two theoretical approaches attempt to explain human language
origins. Discontinuity theories propose that language evolved
rapidly and appeared suddenly, occurring in modern humans in the
past 200,000 years or so. Continuity theories propose that language
evolved gradually, possibly by modifications of communication
common to many animal species that then developed into language
in ancestral hominin species. The Snapshot describes how one gene
that has been related to human language bears on these theories.
SNAPSHOT
Genetic Basis for an Inherited Speech and Language
Disorder
Almost half of the members of three generations of the KE family are affected by a severe
disorder of speech and language inherited as an autosomal (non–sex chromosome)
dominant trait in the forkhead-box protein 2 gene (FOXP2) (Vargha-Khadem et al., 2005). The
impairment, displayed by 15 of 37 family members (Figure A), is best characterized as a
deficit in sequencing articulation patterns, rendering speech sometimes agrammatical and
often unintelligible. The orofacial aspect affects the production of sound sequences, which
makes the deficit resemble Broca aphasia. The affected members are impaired on tests of
mouth movement (oral praxis), including simple movements of clicking the tongue and
movement sequences (e.g., blowing up the cheeks, then licking the lips, and then smacking
the lips).
Figure A
MRI analysis of the affected family members’ brains showed significantly less gray matter in
a number of regions of the neocortex, including the motor regions of the neocortex, along
with increased gray matter in a number of other regions of the neocortex, especially the
somatosensory and auditory regions (Figure B). Many of these brain regions are associated
with producing facial movements necessary for language, among other sensory–motor
behaviors.
Figure B
Brain regions containing altered gray matter in members of the KE family affected by
the FOXP2 gene mutation. Blue represents a decrease in gray matter, and red represents
an increase in gray matter. (Research from Co et al., 2020.)
The gene that makes the forkhead-box protein 2 is implicated in this disorder. The FOXP2
gene regulates the expression of more than 300 other genes during development and during
learning, mainly by blocking their expression. The genes regulated by FOXP2 are present in
different brain regions and also other body regions, including the lungs. Study of the FOXP2
gene has led to the study of related genes, including the FOXP1 and FOXP4 families of genes.
The FOXP genes are especially active during development, suggesting their involvement in
neocortical development, and they remain active throughout life, suggesting an ongoing role
in cortical function. Mutations of FOXP genes have also been implicated in some cases of
autism spectrum disorder, attention-deficit/hyperactivity disorder, and other
neurodevelopmental disorders (Co et al., 2020).
The FOXP2 gene is highly conserved in that it is similar in many nonhuman animal species,
where it also plays a role in the development of many parts of the brain as well as other body
organs. The FOXP2 gene is expressed in brain areas that regulate song learning in birds,
singing in whales, and ultrasonic vocalizations in mice. FOXP2 gene mutations in these
species affect a number of aspects of behavior, including sound production.
The FOXP2 gene has undergone two mutations over the course of hominin evolution. Such rapid evolution suggests that these mutations may have altered neuronal circuitry in the brain’s sensorimotor regions so that mouth movements originally used for eating and drinking could come to contribute to human speech.
Co, M., Anderson, A. G., & Konopka, G. (2020). FOXP transcription factors in vertebrate brain
development, function, and disorders. Wiley Interdisciplinary Reviews of Developmental
Biology, 30, e375. https://doi.org/10.1002/wdev.375.
Vargha-Khadem, F., Gadian, D. G., Copp, A., & Mishkin, M. (2005). FOXP2 and the
neuroanatomy of speech and language. Nature Reviews Neuroscience, 6, 131–138.
Watkins, K. E., Dronkers, N. F., & Vargha-Khadem, F. (2002). MRI analysis of an inherited
speech and language disorder: structural brain abnormalities. Brain, 125(3), 465–478.
Discontinuity Theories
Discontinuity theories emphasize the syntax of human languages
and propose that language arose quite suddenly in modern humans
(Berwick et al., 2013). One emphasis is recognizing the unique
species-specific “computational core” of human language — its
sounds, syntax, and semantics.
Continuity Theories
Continuity theories also consider many lines of evidence, including
the adaptation of animal vocalization for language (Schoenemann,
2012). Perhaps it is a tribute to the imagination with which early speculators approached the question of how vocalizations gave rise to language that, in 1866, the Linguistic Society of Paris banned future discussion of vocalization theory. We will not let that ban deter us.
Precursors of Language Chimpanzee calls and the emotion or feeling with which
they are most closely associated. (Goodall, J. The Chimpanzees of Gombe.
Cambridge, Mass.: Harvard University Press, 1986. Permission granted by The Jane
Goodall Institute.)
Gestures and other visual cues also play a significant role in the
listening process. You may be familiar with the cocktail-party effect.
When listening to speech in a noisy environment, we can “hear”
what a speaker is saying much better if we can see the speaker’s
lips. A phenomenon called the McGurk effect, after its originator,
Harry McGurk (Skipper et al., 2007), offers another demonstration
of “seeing” sounds. When viewers observe a speaker say one word
or syllable while they hear a recording of a second word or syllable,
they “hear” the articulated word or sound that they saw and not the
word or sound that they actually heard. Or they hear a similar but
different word entirely. For example, if the speaker is mouthing “ga”
but the actual sound is “da,” the listener hears “ga” or perhaps the
related sound “ba.” The McGurk phenomenon is robust and
compelling.
You may also have read the transcript of a conversation that took
place between two or more people. It can seem almost
incomprehensible. Had you been present, however, your
observation of the speakers’ accompanying gestures would have
provided clarity. Taken together, studies on vocalization and studies
on gestures, including signing, show that communication is more
than vocalization and that what makes humans special is the degree
to which we communicate.
Experimental Approaches to
Language Origins
Research on language origins considers the many types of
communication different animal species employ, including
birdsong, the elaborate songs and clicking of dolphins and whales,
and the dances of honeybees. Each contains elements of the core
skills underlying language. Languagelike abilities are present in
many different brains, even brains extremely different from our
own.
Language in Birds The African gray parrot Alex, with researcher Irene Pepperberg
and a sampling of the items he could count, describe, and answer questions about.
Alex died in 2007, aged 31.
Exemplars from American Sign Language The Gardners and others taught such
symbols to the chimpanzees in their studies. (Information from Gustason et al., 1975.)
Figure 19.5
Lana’s Keyboard Yerkish consists of nine basic design elements that are
combined to form lexigrams. The photo above shows Lana at the Yerkish
keyboard. An example of these characters is also shown: Design elements (A) are
combined to form lexigrams (B).
Lana had simply to type out her messages on the keyboard. She was
trained first to press keys for various single incentives. The
requirements became increasingly complex, and she was taught to
compose statements in the indicative (“Tim move into room”), the
interrogative (“Tim move into room?”), the imperative (“Please Tim
move into room”), and the negative (“Don’t Tim move into room”).
Eventually, Lana was composing strings of six lexigrams.
Kanzi, introduced earlier in this section, spontaneously learned to
communicate using Yerkish by watching his mother Matata’s failed
training sessions. Kanzi’s knowledge of English words has exceeded
his knowledge of Yerkish lexigrams. To facilitate his learning, his
keyboard was augmented with a speech synthesizer. When he was 6
years old, Kanzi was tested on his comprehension of multisymbol
utterances. He responded correctly to 298 of 310 spoken sentences
of two or more utterances. Joel Wallman (1992) concluded that
Kanzi’s use of lexigrams constitutes the best evidence available to
date for the referential application of learned symbols by an ape.
Language in Hominins
Mounting evidence indicates that Neanderthals and some of our
other hominin cousins were more similar to us in language ability
than they were different. Dediu and Levinson (2013) summarize
evidence from discoveries related to Neanderthal culture and
genetics to argue that they likely shared similar language abilities.
This pushes the origins of language back 800,000 years or so, to the
common ancestor of Neanderthal and modern humans. Having
made such a grand leap, it may not be farfetched to consider that
verbal language is a property of the hominin brain. As such, it may
have become a primary driver in hominin evolution.
Wernicke–Geschwind Model
Broca and Wernicke identified speech areas in patients who had
lesions from stroke. Wernicke’s early neurological model of
language and its revival in the 1960s by Norman Geschwind, as the
Wernicke–Geschwind model, were both based entirely on lesion
data. As diagrammed in Figure 19.6, the three-part model proposes
that comprehension is (1) extracted from sounds in Wernicke’s area
and (2) passed over the arcuate fasciculus pathway to (3) Broca’s
area, to be articulated as speech. Other language functions access
this comprehension–speech pathway as well. The Wernicke–
Geschwind model has played a formative role in directing language
research and organizing research results. More recent research
suggests, however, that although the terms Broca aphasia (loss of the
ability to produce written, spoken, or gestural language) and
Wernicke aphasia (loss of the ability to comprehend speech) may be
useful descriptors of the effects of stroke to anterior and posterior
cortical regions, they are less useful in designating the brain areas
involved in language. That is because the posterior and anterior
language zones are more complex than anticipated by the
Wernicke–Geschwind model.
Figure 19.6
Subdivision of Broca’s Area Anatomical map of Broca’s area that includes areas 44
and 45 with subdivisions, three subdivisions added to area 6, and a plethora of
smaller regions. (Research from Amunts et al., 2010.)
Posterior Cortical Regions for Phonemes and Semantics Four regions of the
posterior language region. The middle superior temporal gyrus is associated with
phonological perception (blue), the posterior superior temporal gyrus is associated
with phonological sounds of words (purple), the area of the angular gyrus and the
ventral temporal cortex is associated with word meanings (green), and the central
portion of the middle temporal gyrus is associated with sentence construction (red).
(Research from Binder, 2017.)
Figure 19.9
At the most basic level of analysis, the dorsal language pathways are
proposed to transform sound information into motor
representation — to convert phonological information into
articulation. The ventral language pathways are proposed to convert phonological information into semantic information. Information
flow in the dorsal pathway is bottom-up, as occurs when we are
asked to repeat nonsense words or phrases. Thus, the temporal
cortex assembles sounds by phonetic structure and passes them
along to the frontal cortex for articulation. No meaning is assigned
to sounds in this pathway. Information flow in the ventral pathway
is proposed to be more top-down, assigning meaning to words and
phrases, as occurs when we assign a specific meaning to a word,
such as “hammer,” that has various meanings.
Phonological and Semantic Regions of Broca’s Area Stimulation of the anterior and
posterior extent of Broca’s area by TMS inhibits semantic and phonological
processing, respectively. (Information from Devlin & Watkins, 2007.)
Speech Zones Mapped by Brain-
Imaging Techniques
Using fMRI to measure brain areas implicated in language, Binder
and his colleagues (1997) reported that sound-processing areas
make up a remarkably large part of the brain. These researchers
presented either tones or meaningful words to 30 right-handed
participants, half of whom were male and half of whom were
female. Tone stimuli consisted of a number of 500- and 750-Hz pure
tones presented in sequence. The participants pressed a button if
they heard two 750-Hz tones in a sequence. Word stimuli were
spoken English nouns designating animals (e.g., turtle). Participants
pushed a button if an animal was both native to the United States
and used by humans. A rest condition consisted of no stimulus
presentations.
Aural Activation Left-hemisphere brain regions, shaded red, and the cerebellum (not
shown), were activated while participants listened to speech, as measured by fMRI.
Participants listened to spoken English nouns designating animals and were required
to decide, in each case, whether the word indicated an animal native to the United
States and used by humans. (Research from Binder et al., 1997.)
Brain Areas Activated by Language Tasks Results obtained with the use of PET to monitor
blood flow were analyzed by using subtraction methods. (Part A: research from Posner &
Raichle, 1983; part B: research from Wagner et al., 2001; part C: research from Martin et al.,
1996; part D: research from Damasio et al., 1996.)
The investigators monitored blood flow using PET and analyzed
their data using a subtraction technique. In the sensory (reading or
listening) tasks, they identified changes from baseline blood flow by
taking the difference between the activities in the two states. In the
output task, they subtracted the sensory activity, and in the
association task, they subtracted the output activity. (Figure 7.16
explains the subtraction technique.)
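As a purely illustrative sketch of that subtraction logic, the short Python example below applies the same hierarchy of subtractions to invented activation values for a few voxels; the numbers are hypothetical, not data from the studies described.

# Hierarchical subtraction on invented activation values (hypothetical data).
import numpy as np

baseline    = np.array([1.0, 1.1, 0.9, 1.0])   # rest condition
sensory     = np.array([1.4, 1.2, 1.0, 1.1])   # passive listening or reading
output      = np.array([1.5, 1.6, 1.1, 1.2])   # speaking the word aloud
association = np.array([1.6, 1.7, 1.5, 1.3])   # generating an associated word

# Each subtraction isolates the processing added at one stage of the task.
sensory_specific     = sensory - baseline       # input processing alone
output_specific      = output - sensory         # added by articulation
association_specific = association - output     # added by association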
Many current language models are based on the idea that language
is widely distributed in cortical and other brain structures. Even
single words are widely distributed, which is one way they acquire
their many meanings. We describe two language-network models
that illustrate the distribution of this network in the cortex. Be
aware, however, that whereas computer networks are precise,
proposed language networks are speculative. First, it is difficult to
establish whether single neurons or groups of neurons are the
proper network elements; and second, information usually flows
one way in the linear programming models used for computer
networks, but network flow in the brain is two-way.
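As a toy illustration of that last point, the sketch below stores a handful of hypothetical connections between labeled regions so that every edge can be traversed in both directions, unlike a strictly feedforward network; the labels are borrowed from the semantic-network figure that follows and imply nothing about real connectivity.

# Toy bidirectional network: hypothetical edges, stored in both directions.
from collections import defaultdict

edges = [("IFG", "AG"), ("AG", "VT"), ("IFG", "dmPF"), ("AG", "PC")]

network = defaultdict(set)
for a, b in edges:
    network[a].add(b)   # forward connection
    network[b].add(a)   # reciprocal connection

print(sorted(network["AG"]))   # regions one step from AG: ['IFG', 'PC', 'VT']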
Figure 19.14
The Semantic Network Seven cortical regions proposed to form the semantic
cortical network include posterior regions (AG, angular gyrus; LVT, lateral ventral
temporal lobe; VT, ventral temporal lobe) that perform perceptual functions, and
anterior regions (dmPF, dorsomedial prefrontal; IFG, inferior frontal gyrus; vmPF,
ventromedial prefrontal cortex; PC, posterior cingulate cortex) that perform action
functions.
As suggested by this summary of the different contributions of the
anterior and posterior cortical regions of the semantic network,
posterior cortical regions are associated with perceptual knowledge of
language, including the interpretation of visual, auditory, and tactile
signals related to the construct of words and their meanings. The
anterior cortical regions are associated with action knowledge, such
as travel to various locations, the use of tools, and social
interactions.
The Response of a Single Voxel to Objects and Actions A continuous semantic space
describes the representation of thousands of object and action categories across the human
brain. The WordNet program arranges nouns (circles) and verbs (squares) in categories. Red
indicates positive responses and blue indicates negative responses of the voxel while the participant viewed movies. The area of each marker indicates response magnitude. The voxel mapped in
this figure is responsive to scenes containing constructed objects.
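For readers unfamiliar with WordNet, the sketch below uses the NLTK interface (an assumption about tooling; the study's own pipeline is not described here) to print the nested noun categories above one word, the kind of is-a hierarchy from which the figure's categories are drawn.

# Requires the nltk package and its WordNet corpus: nltk.download("wordnet")
from nltk.corpus import wordnet as wn

turtle = wn.synsets("turtle", pos=wn.NOUN)[0]   # first noun sense of "turtle"

# Each hypernym path runs from the most general category down to "turtle"
# (e.g., entity -> ... -> animal -> ... -> turtle).
for path in turtle.hypernym_paths():
    print(" -> ".join(s.name().split(".")[0] for s in path))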
Figure 19.16
Semantic Maps for Listening and Reading Semantic maps are displayed on the
cortical surface of one participant. The color wheel legend at the center
indicates the associated semantic concepts. Maps occur in the prefrontal cortex
(PFC), medial prefrontal cortex (MPC), lateral prefrontal cortex (LPF), lateral
temporal cortex (LTC), and ventral temporal cortex (VTC) but are less well-
defined in the auditory cortex (AC) and extrastriate cortex (ECT). (Reprinted by
permission of the Journal of Neuroscience, from Deniz, F., Nunez-Elizalde, A.O.,
Huth, A.G., Gallant, J.L. “The Representation of Semantic Information Across
Human Cerebral Cortex During Listening Versus Reading Is Invariant to Stimulus
Modality.” Journal of Neuroscience, 39:7722–7736, 2019, Figure 4. Permission
conveyed through Copyright Clearance Center, Inc.)
Neural Webs for Language Tasks Nodes are symbolized by circles, and edges
(interconnecting axonal pathways) are represented by lines. In this model,
different word-related tasks use different neural webs. (Information from
Salmelin & Kujala, 2006.)
19.4 Language Disorders
Standard language function depends on the complex interaction of
sensory integration and symbolic association, motor skills, learned
syntactical patterns, and verbal memory. Aphasia may refer to a
language disorder apparent in speech, in writing (also called
agraphia), or in reading (also called alexia) produced by injury to
brain areas specialized for these functions. Thus, disturbances of
language due to severe intellectual impairment, to loss of sensory
input (especially vision and hearing), or to paralysis or
incoordination of the musculature of the mouth (called anarthria)
or hand (for writing) are not considered aphasic disturbances.
These disorders may accompany aphasia, however, and they
complicate its study.
Aphasic symptoms are commonly grouped under three broad headings: disorders of comprehension, disorders of production, and poor articulation.
Fluent Aphasias
Fluent aphasias are impairments related mostly to language input
or reception. Individuals with these impairments produce fluent
speech but have difficulties either in auditory verbal
comprehension or in repeating words, phrases, or sentences
spoken by others. To a listener who does not speak the language of
a fluent aphasic, it seems as though the patient is speaking easily
and correctly.
Pure Aphasias
The pure (or selective) aphasias include alexia, an inability to read;
agraphia, an inability to write; and word deafness, in which a person cannot comprehend or repeat spoken words despite intact hearing. These disorders may be restricted to a
specific ability. For example, a person may be able to read but not
write or may be able to write but not read.
19.5 Localization of Lesions in
Aphasia
Students who are just beginning to study the neural bases of
language are intrigued by the simplicity of the Wernicke–
Geschwind model, where Wernicke’s area is associated with speech
comprehension, Broca’s area is associated with speech production,
and the fibers connecting them translate meaning into sound (see
Figure 19.6). As earlier sections explain, however, the neural
organization of language is more complex and requires
consideration of the brain’s many pathways and anatomical regions
related to language.
Middle Cerebral Artery The amount of damage to the cortex due to blockage or
bleeding of the middle cerebral artery (red) can vary widely in the neocortex (A) and
the basal ganglia (B), depending on the location of the blockage or bleeding.
Right-Hemisphere Contributions to
Language
Although it is well established that the left hemisphere of right-
handed people is dominant in language, the right hemisphere also
has language abilities. The best evidence comes from studies of
split-brain patients in whom the linguistic abilities of the right
hemisphere have been studied systematically with the use of
various techniques for lateralizing input to one hemisphere (such
as shown in Figure 11.9).
The results of these studies show that the right hemisphere has
little or no speech but surprisingly good auditory comprehension of
language, including for both objects and actions. There is some
reading ability but little writing ability in the right hemisphere. In
addition, although the right hemisphere can recognize words
(semantic processing), it has little understanding of grammatical
rules and sentence structures (syntactical processing).
Function                  Left hemisphere    Right hemisphere
Gestural language               +                  +
Prosodic language
  Rhythm                       ++
  Inflection                    +                  +
  Timbre                        +                 ++
  Melody                                          ++
Semantic language
  Word recognition              +                  +
  Verbal meaning               ++                  +
  Concepts                      +                  +
  Visual meaning                +                 ++
Syntactical language
  Sequencing                   ++
  Relations                    ++
  Grammar                      ++
Reprinted from Cortex, Vol. 22, Benson, D. F., Aphasia and lateralization of language, pages
71–86, © 1986, with permission from Elsevier.
19.6 Neuropsychological
Assessment of Aphasia
The neuropsychological assessment of aphasia has changed greatly
with the advent of brain imaging. Comprehensive test batteries
were once used to localize brain lesions and to establish a
standardized, systematic procedure for assessing aphasia, both to
provide clinical descriptions of patients and to facilitate
comparison of patient populations. With the advent of brain
imaging, a brain lesion can be quickly and accurately localized, but
there remains a need for tests that can be administered quickly (in
under 30 minutes) while still providing detailed information. An
additional problem is that although there is no shortage of tests, the
adequacy of all current tests has been called into question (Rohde
et al., 2018). Interestingly, simply measuring the timing of pauses in speech has been suggested as a test of language impairment (Angelopoulou, 2018). Finally, therapeutic approaches are now directed more toward individual symptoms, such as those listed in Table 19.2, than toward assessment of a patient's overall condition.
Neurosensory Center Comprehensive Examination for Aphasia (Spreen & Benton, 1969)
Wepman–Jones Language Modalities Test for Aphasia (Wepman & Jones, 1961)
Dual-Route Model Speech from print can follow a number of routes and can be
independent of comprehension or pronunciation. (Information from Coltheart, 2005.)
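As a rough sketch of that dual-route idea, the toy Python example below pronounces familiar words by direct lexical lookup and assembles unfamiliar letter strings by rule; the tiny lexicon and letter-to-sound rules are invented for illustration and are far simpler than the model Coltheart describes.

# Toy dual-route reader: lexical lookup for known words, naive
# letter-to-sound rules for everything else. All entries are invented.
LEXICON = {                     # lexical route: stored whole-word pronunciations
    "yacht": "yot",
    "colonel": "ker-nul",
    "cat": "kat",
}

RULES = {                       # sublexical route: letter-to-sound correspondences
    "c": "k", "a": "a", "t": "t", "b": "b", "i": "i", "g": "g", "l": "l",
}

def read_aloud(word: str) -> str:
    if word in LEXICON:                                  # known (possibly irregular) word
        return LEXICON[word]
    return "".join(RULES.get(ch, ch) for ch in word)     # assemble by rule

print(read_aloud("yacht"))   # lexical route -> "yot"
print(read_aloud("blig"))    # rule-based route -> "blig"

Selective damage to one route in such a scheme would spare regular words and nonwords while impairing irregular words, or the reverse, which is how dual-route accounts are often used to interpret acquired reading disorders.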
SUMMARY
19.1 Language consists of methods of social
information sharing
Language allows humans to organize sensory inputs by assigning
tags to information. Tagging allows us to categorize objects and
ultimately concepts and to speak to ourselves about our past and
future. Language also includes the unique motor act of producing
syllables as well as the ability to impose grammatical rules. Both
dramatically increase the functional capacity of language.
Key Terms
agraphia
alexia