The Neuroscience of Speech and Language
ABSTRACT: Speech, language, and music are defining characteristics of humans. Recent advances in neuroimaging have allowed researchers to examine brain activity during speech and language tasks, revealing the complexity of these processes. Research has also begun to better understand the neurobiological underpinnings of speech and language impairments. Yet, there remains a need to better […]

[…] associating meaning. Remarkably, the auditory system precisely encodes vibrations as small in diameter as an atom, and does so a thousand times faster than the visual system (Purves, Augustine, Fitzpatrick, et al., 1997). The earliest stage of neural processing of sound begins in the cochlea, a small snail-shaped organ in the inner ear. In this organ, there are hair cells whose […]
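Each hair cell responds best to a particular frequency determined by its position along the cochlea, a place code that gives rise to the tonotopic representation discussed below. As a rough outside illustration (Greenwood's classic frequency-position function for the human cochlea; a standard textbook formula, not one given in this chapter):

```latex
% Greenwood's (1990) frequency-position function for the human cochlea.
% x = fractional distance from the apex (x = 0) to the base (x = 1);
% F(x) = best frequency in Hz; A = 165.4, a = 2.1, k = 0.88 are the
% constants conventionally fitted for humans.
\[
  F(x) \;=\; A\bigl(10^{\,a x} - k\bigr)
        \;=\; 165.4\,\bigl(10^{\,2.1 x} - 0.88\bigr)
\]
```

This places F(0) ≈ 20 Hz at the apex and F(1) ≈ 20,700 Hz at the base, spanning the range of human hearing, and this low-to-high ordering is preserved along the pathway up to the primary auditory cortex.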
[…] the superior olivary complex, still within the brainstem. The primary function of this area is to determine differences in timing (i.e., interaural time) and intensity (i.e., interaural intensity) between the signals from each ear for sound localization (Knudsen, 1999). From here, auditory signals are transmitted to the lateral lemniscus and then to the inferior colliculus, still in the brainstem. Here, the auditory spatial map is generated and aligned with a visual spatial map in another area of the brainstem, the superior colliculus, for further complex coding of sound localization. From the inferior colliculus, the auditory pathway continues to the medial geniculate nucleus of the thalamus, which regulates which sensory information reaches the cortex, and then to the primary auditory cortex (Figure 3) (Cohen & Knudsen, 1999; Kandel et al., 2013).

The primary auditory cortex is located in the temporal lobe. This area contains a distinct region housing the tonotopic representation, as well as regions that code other features of the auditory stimuli, such as latency, loudness, and frequency modulation. From the primary auditory cortex, auditory information is segregated into two separate processing streams, the "what" (i.e., ventral stream) and "where" (i.e., dorsal stream) pathways, which originate in different parts of the primary auditory cortex. The "what" stream travels just below the primary auditory cortex to other temporal lobe regions and is involved in sound identification. The "where" stream travels to the parietal lobe and is involved in sound localization (Lomber & Malhotra, 2008; Rauschecker & Tian, 2000). However, more recent imaging studies have suggested that the parietal pathways may also contribute to speech perception (Kandel et al., 2013).

From this point in the auditory pathway, it is less clear how the basic properties of sound are transformed and identified as words with meaning. Specific areas of the cortex have been identified as being involved with this […]
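The interaural timing cue computed in the superior olivary complex, described above, can be made concrete with a standard spherical-head approximation (Woodworth's formula; an outside worked example, not a derivation from this chapter). For a head of radius r, speed of sound c, and a source at azimuth θ (in radians, 0 = straight ahead):

```latex
% Woodworth's spherical-head approximation to the interaural time
% difference (ITD) used for sound localization.
\[
  \Delta t \;\approx\; \frac{r}{c}\,\bigl(\theta + \sin\theta\bigr)
\]
% Worked example: r = 0.0875 m, c = 343 m/s, source directly to one
% side (theta = pi/2):
\[
  \Delta t \;\approx\; \frac{0.0875}{343}\,\Bigl(\frac{\pi}{2} + 1\Bigr)
  \;\approx\; 6.6 \times 10^{-4}\ \mathrm{s} \;=\; 0.66\ \mathrm{ms}
\]
```

Even for a source directly to one side, the arrival-time difference is well under a millisecond, which is why the remarkable temporal precision of the auditory system noted earlier is essential for localization.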
The amygdala has been implicated in emotion perception of language (Liebenthal, Silbersweig, & Stern, 2016). Finally, it is now understood that the arcuate fasciculus interconnects large areas of the sensory cortex with prefrontal and premotor regions, supporting the importance of multiple sensory areas, including vision, in language processing (Kandel et al., 2013; Table 1). Taken together, research now suggests that multiple cortical brain regions are involved in language processing, forming a much larger network than initially conceived (Tremblay & Dick, 2016).

The Neuroscience of Speech Production

[…] the basal ganglia and cerebellum may also be involved in the adaptation, error correction, and modulation of ongoing speech (Kotz, Schwartze, & Schmidt-Kassow, 2009).

The final command to produce speech is mediated by the primary motor cortex. In the primary motor cortex, a map of the human body (i.e., the homunculus) is represented, with the greatest amount of neural representation devoted to those parts of the body that require fine adjustments in movement. Thus, the mouth and tongue encompass a large region of the primary motor cortex. This area sends the final command to the muscles of the tongue, mouth, and vocal cords, while coordinating breath control to produce fluent speech (Kandel et al., 2013).
Table 1
Brain Regions Involved in the Neuroscience of Speech and Language
[…] tongue, vocal folds, and/or diaphragm are affected. There are multiple types of dysarthria, caused by diseases ranging from cerebral palsy to stroke to Parkinson's disease. In short, spastic dysarthria is due to damage to the white-matter tract that runs from the motor cortex to the spine. Hyperkinetic and hypokinetic dysarthria result from damage to the basal ganglia. Ataxic dysarthria is due to damage in the cerebellum, while flaccid dysarthria results from damage to the neurons that run from the spine to the muscle (Enderby, 2013). In contrast, aphasia is due to impairments in cortical regions involved in language processing and speech. There are many types of aphasia that differentially affect speech and comprehension […]

[…] the presence of involuntary speech. If the knob is turned down (i.e., decreased dopamine), hypokinetic (i.e., reduced movement) symptoms can occur, such as low speech volume, monotone speech, and an inability to initiate speech. Taking this information together with knowledge of how music affects the brain, the music therapist may then consider the best way to incorporate rhythm, vocal range, and music preference into interventions for dysarthria. For example, for hypokinetic dysarthria (i.e., decreased dopamine and reduced movement), a music therapist may consider using preferred music, as this has been shown to increase dopamine production in the brain (Menon & Levitin, 2005), and the incorporation […]
Table 2
Types of Dysarthrias and Aphasias
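The dysarthria classification laid out above lends itself to a simple lookup table. Below is a minimal sketch in Python (the names are illustrative, not from the chapter), pairing each dysarthria type with the lesion site named in the text (after Enderby, 2013):

```python
# Minimal sketch: dysarthria subtypes mapped to the lesion sites named in
# the text (after Enderby, 2013). Names and structure are illustrative.
DYSARTHRIA_LESION_SITE = {
    "spastic": "white-matter tract from the motor cortex to the spine",
    "hyperkinetic": "basal ganglia",
    "hypokinetic": "basal ganglia",
    "ataxic": "cerebellum",
    "flaccid": "neurons running from the spine to the muscle",
}

def lesion_site(dysarthria_type: str) -> str:
    """Return the lesion site associated with a given dysarthria subtype."""
    site = DYSARTHRIA_LESION_SITE.get(dysarthria_type.lower())
    if site is None:
        raise ValueError(f"unknown dysarthria type: {dysarthria_type!r}")
    return site

print(lesion_site("Ataxic"))  # -> cerebellum
```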
[…] overcome the impairment in comprehension may be challenging. Thus, embedding meaning in music, a less complex auditory signal (Stegemöller et al., 2008), may aid in strengthening the network of brain regions involved in language comprehension.

Conclusion

There are many interventions that have been and will be developed for the treatment of dysarthria and aphasia. However, it is important to understand both the normal and impaired neurophysiology associated with speech and language processing to better inform the design and implementation of […]

References

Gentilucci, M., & Dalla Volta, R. (2008). Spoken language and arm gestures are controlled by the same motor control system. Quarterly Journal of Experimental Psychology, 61, 944–957.
Geschwind, N. (1970). The organization of language and the brain. Science, 170, 940–944.
Goodglass, H. (1993). Understanding Aphasia. San Diego, CA: Academic Press.
Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8, 393–402.
Kandel, E. R., Schwartz, J. H., Jessell, T. M., Siegelbaum, S. A., & Hudspeth, A. J. (2013). Principles of Neural Science (5th ed.). New York: McGraw-Hill.
Kiang, N. Y. S. (1984). Peripheral neural processing of auditory information. In J. M. Brookhart, V. B. Mountcastle, I. Darian-Smith, & S. R. Geiger (Eds.), Handbook of Physiology, Section 1: The Nervous System, Vol. III: Sensory Processes, Part 2. Bethesda, MD: American Physiological Society.