
The Neuroscience of Speech and Language

ELIZABETH L. STEGEMÖLLER, PhD Iowa State University

ABSTRACT: Speech, language, and music are defining characteristics of humans. Recent advances in neuroimaging have allowed researchers to examine brain activity during speech and language tasks, revealing the complexity of these processes. Research has also begun to better understand the neurobiological underpinnings of speech and language impairments. Yet, there remains a need to better understand how music therapy impacts these impairments. The main purpose of this manuscript is to provide an overview of the neuroscience of speech and language. The use of music therapy in the treatment of speech and language impairments, such as aphasia and dysarthria, will also be touched upon. The neuroscience of auditory processing, language processing, and speech production will be provided, followed by a brief overview of the neural underpinnings of dysarthria and aphasia and implications for music therapy.

Keywords: neuroscience of speech and language, music and speech, music therapy

Elizabeth L. Stegemöller is faculty in the Department of Kinesiology at Iowa State University. Address correspondence concerning this article to Elizabeth L. Stegemöller, PhD, Department of Kinesiology, Iowa State University, Ames, IA 50011. E-mail: [email protected].
© American Music Therapy Association 2017. All rights reserved. For permissions, please e-mail: [email protected]. doi:10.1093/mtp/mix007. Advance Access publication June 15, 2017. Music Therapy Perspectives, 35(2), 2017, 107–112.

Introduction

The neuroscience of speech and language is complex, involving auditory processing, language processing, and speech production. Less than one-half of the human brain is dedicated to sensory and motor function; the remainder is devoted to complex cognitive behavior, such as language processing. Due to the complexity of speech and language, most of the neuroscience findings have been obtained from observation of persons with aphasia and postmortem examination of their brains. However, as neuroimaging technology advances, researchers are slowly uncovering more information about the neuroscience of speech and language. To better understand this new and exciting literature, there is a need to understand the information upon which research in the neuroscience of speech and language has been built. This knowledge will inform clinical practice. Thus, the purpose of this paper is to provide an overview of the neuroscience of speech and language processing. To accomplish this purpose, three sections will be reviewed: 1) auditory processing, 2) language processing, and 3) speech production. In addition, a brief overview of the neural underpinnings of dysarthria and aphasia will be covered, as well as a few implications for music therapy.

The Neuroscience of Auditory Processing

The understanding of language begins with auditory processing. Humans must accurately detect minor differences in the spectral and temporal components of sound before associating meaning. Remarkably, the auditory system precisely encodes vibrations as small in diameter as an atom and a thousand times faster than the visual system (Purves, Augustine, Fitzpatrick, et al., 1997). The earliest stage of neural processing of sound begins in the cochlea, a small snail-shaped organ in the inner ear. In this organ, there are hair cells whose function is to transform mechanical energy (i.e., sound waves) into neural signals. The hair cells are distributed throughout the length of the cochlea and are bordered by a stiff tectorial membrane on top and a more fluid basilar membrane on the bottom (Dallos, 1992). Before reaching the cochlea, the external and middle ear collect and amplify sound energy in the air so that it is transmitted to the fluid-filled cochlea of the inner ear. The pressure waves generated in the inner ear cause the deformation of the basilar membrane (Figure 1A).

Since the basilar membrane is attached to the cochlea only at one end, the deformations are much like the vibration of a diving board. Moreover, the basilar membrane is narrower at the base and wider at the apex. Thus, larger sound waves (lower pitches) preferentially cause deformation toward the apex (wide free end) of the basilar membrane, while smaller sound waves (higher frequencies) cause deformations toward the base (narrow attached end). This anatomical structure provides the basis for the beginning of the auditory map, referred to as tonotopicity, in which sound is decomposed into frequencies mapped from high to low along the membrane (Kiang, 1984; Knudsen & Konishi, 1978) (Figure 2).

Recall that the basilar membrane lies below the hair cells. Above the hair cells is the stiff tectorial membrane, in which the tiny stereocilia (i.e., little hairs) of the hair cells are embedded. Thus, when the basilar membrane is driven up and down, it causes a shearing motion between the tectorial membrane and the stereocilia. When the stereocilia are deflected in one direction, an action potential is generated (Figure 1B) (Lewis & Hudspeth, 1983); when they are deflected in the opposite direction, the action potential is prevented. Thus, the anatomy of the inner ear allows not only for specificity in the coding of the frequency of sound (i.e., pitch), but also for precise coding of timing.

Once the action potential has been generated, it is propagated down the auditory nerve (i.e., the eighth cranial nerve) to terminate exclusively in the cochlear nucleus, which is in the brainstem (Purves et al., 1997; Kandel, Schwartz, Jessell, Siegelbaum, & Hudspeth, 2013). The majority of these fibers are large and fast, carry information for a small, specific range of frequencies, and terminate in the cochlear nucleus in a tonotopic organization, such that the neural map of frequency (Figure 2) is maintained throughout the pathway (Kiang, 1984; Knudsen & Konishi, 1978). A smaller proportion of fibers are small and slow and function to sharpen the timing and frequency information (Kandel et al., 2013).
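The frequency-to-place mapping just described can be made concrete with a small numerical sketch. It uses the classic Greenwood frequency–position function, a standard approximation of human cochlear tonotopy that is not among this article's sources; the constants below are the commonly cited human values and are an assumption of the sketch, not part of the article.

```python
# Minimal sketch of cochlear tonotopy using the Greenwood frequency-position
# function, F(x) = A * (10**(a * x) - k), with commonly cited human constants.
# x is the fractional distance along the basilar membrane from the apex (0.0,
# the wide free end) to the base (1.0, the narrow attached end).

A, a, k = 165.4, 2.1, 0.88  # standard human parameter values (assumption)

def characteristic_frequency(x: float) -> float:
    """Approximate characteristic frequency (Hz) at fractional position x."""
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"position {x:.2f} from apex -> ~{characteristic_frequency(x):7.0f} Hz")
# Values run from roughly 20 Hz near the apex to roughly 20,000 Hz near the
# base, matching the high-to-low frequency map sketched in Figure 2.
```

Under this approximation, roughly equal steps along the membrane correspond to roughly equal steps in log-frequency, which echoes the logarithmic spacing of musical pitch.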
Figure 1. (A) Anatomy of the inner ear. (B) Anatomy of the hair cells and scala media. Taken from: Stanfield, C. L. (2011). Principles of Human Physiology (Vol. 4). New York, NY: Pearson Education. Reprinted by permission of Pearson Education, Inc.

Figure 2. Schematic of the tonotopic organization of the basilar membrane.

From the cochlear nucleus, the next step in the auditory pathway is the superior olivary complex, still within the brainstem. The primary function of this area is to determine differences in timing (i.e., interaural time) and intensity (i.e., interaural intensity) between the signals from each ear for sound localization (Knudsen, 1999). From here, auditory signals are transmitted to the lateral lemniscus, and then to the inferior colliculus, still in the brainstem. Here, the auditory spatial map is generated and is aligned with a visual spatial map in another area of the brainstem, the superior colliculus, for further complex coding of sound localization. From the inferior colliculus, the auditory pathway continues to the medial geniculate nucleus of the thalamus, which regulates which sensory information reaches the cortex, and then to the primary auditory cortex (Figure 3) (Cohen & Knudsen, 1999; Kandel et al., 2013).
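To give a sense of the scale of the interaural timing cues the superior olivary complex compares, here is a minimal numerical sketch. It relies on the Woodworth spherical-head approximation, which is not part of this article; the head radius and speed of sound are typical textbook values and are assumptions of the sketch.

```python
# Rough sketch of the interaural time difference (ITD) available for sound
# localization, using the Woodworth spherical-head approximation:
#   ITD(theta) ~ (r / c) * (theta + sin(theta))
# where theta is the source azimuth in radians, r the head radius, and c the
# speed of sound. Illustrative values only; not taken from the article.

import math

HEAD_RADIUS_M = 0.0875   # ~8.75 cm, a typical adult value (assumption)
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def itd_seconds(azimuth_deg: float) -> float:
    """Approximate ITD for a distant source at the given azimuth (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

for deg in (0, 15, 45, 90):
    print(f"azimuth {deg:3d} deg -> ITD ~{itd_seconds(deg) * 1e6:6.0f} microseconds")
# Even a source directly to one side arrives only ~650-700 microseconds earlier
# at the near ear, which is why the brainstem's timing precision matters.
```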
The primary auditory cortex is located in the temporal lobe. This area contains a distinct region with a tonotopic representation, as well as regions that code other features of the auditory stimuli, such as latency, loudness, and frequency modulation. From the primary auditory cortex, auditory information is segregated into two separate processing streams, the "what" (i.e., ventral stream) and "where" (i.e., dorsal stream) pathways, which originate in different parts of the primary auditory cortex. The "what" stream travels just below the primary auditory cortex to other temporal lobe regions and is involved in sound identification. The "where" stream travels to the parietal lobe and is involved with sound localization (Lomber & Malhotra, 2008; Rauschecker & Tian, 2000). However, more recent imaging studies have suggested that the parietal pathways may also contribute to speech perception (Kandel et al., 2013).

From this point in the auditory pathway, it is less clear how the basic properties of sound are transformed and identified as words with meaning.
Figure 3. Schematic of the auditory pathway.

Specific areas of the cortex have been identified as being involved in this process, but the exact neural mechanism remains unknown. Nonetheless, it is important to understand the basic anatomy and function of the auditory pathway, as impairments in auditory processing can contribute to deficits in speech and language. Distinguishing and/or identifying the site of impairment can provide useful information for the design and implementation of therapeutic interventions.
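As a compact summary of the ascending pathway just described, the stations can be listed in order from the ear to the cortex. This is only a restatement of the text and Figure 3 in code form; the station names and roles are exactly those given above.

```python
# Ordered stations of the ascending auditory pathway, as described in the text
# and sketched in Figure 3 (cochlea to primary auditory cortex).
AUDITORY_PATHWAY = [
    ("cochlea (hair cells)", "transduce sound waves into action potentials"),
    ("auditory nerve (cranial nerve VIII)", "carry signals to the brainstem"),
    ("cochlear nucleus", "preserve the tonotopic map; sharpen timing and frequency"),
    ("superior olivary complex", "compare interaural time and intensity"),
    ("lateral lemniscus", "relay signals toward the midbrain"),
    ("inferior colliculus", "generate the auditory spatial map"),
    ("medial geniculate nucleus (thalamus)", "regulate what reaches the cortex"),
    ("primary auditory cortex", "tonotopy plus latency, loudness, and modulation"),
]

for station, role in AUDITORY_PATHWAY:
    print(f"{station}: {role}")
```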

The Neuroscience of Language Processing

Pierre Paul Broca was the first to identify the brain areas involved with language, in 1861. From examining the postmortem brains of patients who could not speak but had no motor deficits of the tongue, mouth, or vocal cords, he identified lesions in the left posterior region of the frontal lobe. This area is now referred to as Broca's area, and impairments in this area result in Broca's aphasia (Broca, 1865; Kandel et al., 2013). Then, in 1876, Carl Wernicke described another form of aphasia in which patients could form words but did not understand language. He identified lesions in the left posterior part of the cortex where the temporal, parietal, and occipital lobes meet, which is now known as Wernicke's area (Wernicke, 1908; Kandel et al., 2013). From this information, the Wernicke–Geschwind model of language processing (Geschwind, 1970) was proposed, stating that the initial steps of language processing occur in separate sensory areas (i.e., auditory or visual) and are then conveyed to the cortical association area specialized for both auditory and visual processing (i.e., the angular gyrus). At this point, a common neural code for both speech and writing is formed and conveyed to Wernicke's area, where meaning is associated and transferred to higher brain structures. Information is also transformed into acoustic patterns and conveyed via the arcuate fasciculus (i.e., a white-matter tract) to Broca's area, in which the neural code is transformed into a motor representation (Wernicke, 1908; Geschwind, 1970; Kandel et al., 2013) (Figure 4).

Figure 4. Wernicke–Geschwind model of language processing.
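The classic route described above is essentially a fixed pipeline, so it can be written out step by step. The sketch below only restates the model as summarized in the text and Figure 4; it is not an endorsement of the classic model over the larger-network view discussed next, and the function name is illustrative.

```python
def wernicke_geschwind_route(modality: str) -> list:
    """Processing route for a word under the classic model, as summarized in the text."""
    sensory = {"heard": "primary auditory cortex", "read": "primary visual cortex"}[modality]
    return [
        sensory,              # separate sensory areas for hearing vs. reading
        "angular gyrus",      # common neural code for speech and writing
        "Wernicke's area",    # meaning is associated
        "arcuate fasciculus", # white-matter tract conveying the code forward
        "Broca's area",       # code transformed into a motor representation
    ]

print(" -> ".join(wernicke_geschwind_route("heard")))
print(" -> ".join(wernicke_geschwind_route("read")))
```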
While the findings of Broca, Wernicke, and Geschwind were important milestones in the effort to understand language processing, advances in neuroscience have revealed additional information. Research has revealed that elements of language processing are lateralized. For example, prosodic information (i.e., changes in pitch, duration, and loudness) provides semantic meaning or emotional meaning and engages both the right and left hemispheres, depending on the information conveyed (Kandel et al., 2013). Research has shown that in languages in which a change in pitch indicates semantic meaning, language processing is generally lateralized to the left. However, in languages in which a change in pitch indicates emotional meaning, the right hemisphere is generally engaged (Gandour et al., 2000; Wildgruber, Ackermann, Kreifelts, & Ethofer, 2006).

Research has also revealed the involvement of additional brain regions in language processing beyond the areas proposed by the Wernicke–Geschwind model (Hickok & Poeppel, 2007). To begin with, there are expanded association areas of the left frontal, temporal, and parietal regions involved with processing concepts and words, including audiovisual processes (Damasio, Tranel, Grabowski, Hichwa, & Damasio, 2004; Bernstein & Liebenthal, 2014; Binder, 2015). The left insular region has been identified as an additional area for language production (Ardila, Bernal, & Rosselli, 2016). Prefrontal and cingulate areas have been implicated in the executive control, working memory, and attention involved in language processing (Chertkow & Murtha, 1997; Kandel et al., 2013).
The amygdala has been implicated in the perception of emotion in language (Liebenthal, Silbersweig, & Stern, 2016). Finally, it is now understood that the arcuate fasciculus interconnects large areas of the sensory cortex with prefrontal and premotor regions, supporting the importance of multiple sensory areas, including vision, in language processing (Kandel et al., 2013) (Table 1). Taken together, research now suggests that multiple cortical brain regions are involved in language processing and form a much larger network than initially conceived (Tremblay & Dick, 2016).

The Neuroscience of Speech Production

Once language processing is complete, speech must be produced. As Broca noted, the ability to move the tongue, mouth, and vocal cords was still intact in those patients with Broca's aphasia. This would suggest that additional brain regions are responsible for the execution of speech. However, before the execution of speech occurs, many movement elements, such as coordination, amplitude, and timing, must be planned. For example, the correct movement sequencing of the tongue, mouth, and other areas must be planned in order to produce coherent speech. The supplementary motor area has been implicated in movement sequencing (Tanji & Shima, 1996; Price, 2010; Cona & Semenza, 2017). The planning of when to speak in relation to external cues (i.e., someone else talking) must also be formulated. The premotor region has been implicated in the planning of movement with respect to external cues, as well as in the gestures that accompany speech production (Deiber, Honda, Ibañez, Sadato, & Hallett, 1999; Gentilucci & Dalla Volta, 2008). Finally, the cingulate motor area is involved with movement planning based upon the emotional processing of movement, such as facial expression, and may convey emotion in speech (Müri, 2016). While these cortical regions play an extensive role in the planning of speech production, other motor regions, including the basal ganglia and cerebellum, may also be involved in the adaptation, error correction, and modulation of ongoing speech (Kotz, Schwartze, & Schmidt-Kassow, 2009).

The final command to produce speech is mediated by the primary motor cortex. In the primary motor cortex, a map of the human body (i.e., the homunculus) is represented, with the greatest amount of neural representation for those parts of the body that require fine adjustments in movement. Thus, the mouth and tongue encompass a large region of the primary motor cortex. This area sends the final command to the muscles of the tongue, mouth, and vocal cords, while coordinating breath control to produce fluent speech (Kandel et al., 2013).

When thinking of speech and language processing as a whole, it is interesting to note that more specific information is known about auditory processing and speech production, while language processing is more complex and involves multiple brain regions and networks. Still, all three levels, auditory processing, language processing, and speech production, are more complicated than the general overview provided here. It is an exciting time, as advancing technology will allow researchers to continue to examine and understand these processes more deeply. An initial understanding of these processes is the building block needed for continued research and application to clinical practice.

Implications for Music Therapy

A basic understanding of the underlying neurophysiology of speech and language can greatly increase a clinician's ability to understand various impairments in speech and language. Music therapy has been used to treat various speech and language processing disorders, most notably dysarthria and aphasia.

Table 1
Brain Regions Involved in the Neuroscience of Speech and Language

Domain Brain region Function


Sensory processing Auditory cortex Auditory processing
Sensory processing Visual cortex Visual processing
Language processing Angular gyrus Association for auditory and visual information
Language processing Wernicke’s area Language comprehension
Language processing Broca’s area Language production
Language processing Arcuate fasciculus Convey language comprehension to production
Language processing Left frontal area Concepts, words, and audiovisual processes
Language processing Left temporal area Concepts, words, and audiovisual processes
Language processing Left parietal area Concepts, words, and audiovisual processes
Language processing Left insular region Language production
Language processing Prefrontal area Executive control, working memory, attention
Language processing Cingulate area Executive control, working memory, attention
Language processing Amygdala Emotion perception
Speech production Supplementary motor area Movement sequencing
Speech production Premotor region Movement planning in relation to external cue
Speech production Cingulate motor area Emotional processing of movement
Speech production Basal ganglia Adaptation, error correction, modulation
Speech production Cerebellum Adaptation, error correction, modulation
Speech production Primary motor cortex Final command to produce speech
Dysarthria is an impairment of motor speech production. Typically, dysarthria arises from impairments in the motor system in which the neurons controlling the movement of the lips, tongue, vocal folds, and/or diaphragm are affected. There are multiple types of dysarthria, caused by various diseases ranging from cerebral palsy to stroke to Parkinson's disease. In short, spastic dysarthria is due to damage to the white-matter tract that runs from the motor cortex to the spine. Hyperkinetic and hypokinetic dysarthria are the results of damage to the basal ganglia. Ataxic dysarthria is due to damage in the cerebellum, while flaccid dysarthria results from damage to the neurons that run from the spine to the muscle (Enderby, 2013).

In contrast, aphasia is due to impairments in cortical regions involved in language processing and speech. There are many types of aphasia that differentially affect speech and comprehension. Broca's aphasia is the result of damage to the left posterior frontal cortex and underlying structures, and speech production is mainly impaired. Wernicke's aphasia is the result of damage to the left posterior superior and middle temporal cortex, in which comprehension is mainly impaired. Other aphasias include conduction aphasia (impaired repetition), global aphasia (impaired speech and comprehension), transcortical motor aphasia (impaired speech), and transcortical sensory aphasia (impaired comprehension) (Goodglass, 1993; Kandel et al., 2013). While the many types of dysarthria and aphasia can be complex, understanding the differences in the neuroanatomy and physiology associated with these varying diagnoses can inform treatment strategies (Table 2).

For example, the basal ganglia are involved in motor control and learning. The normal functions of the basal ganglia include regulating the force of movement, regulating the timing of movement, and setting postural adjustments prior to movement. In addition, the basal ganglia are involved in the initiation of internally generated movements (Kandel et al., 2013). Extending these functions to speech, the basal ganglia are involved in determining vocal loudness, timing and sequencing the movement patterns of the tongue and lips, coordinating breath control and posture with speech production, and initiating internally generated speech (i.e., speech with no external cue). Thus, damage to this area of the brain can result in changes in these functions.

It may be useful to think of the basal ganglia as a knob that is modulated by dopamine. If the knob is turned up (i.e., increased dopamine), hyperkinetic (i.e., too much movement) symptoms can occur, such as a harsh and strained voice and the presence of involuntary speech. If the knob is turned down (i.e., decreased dopamine), hypokinetic (i.e., reduced movement) symptoms can occur, such as low speech volume, monotone speech, and an inability to initiate speech.

Taking this information together with knowledge of how music affects the brain, the music therapist may then consider the best way to incorporate rhythm, vocal range, and music preference into interventions for dysarthria. For example, for hypokinetic dysarthria (i.e., decreased dopamine and reduced movement), a music therapist may consider using preferred music, as this has been shown to increase dopamine production in the brain (Menon & Levitin, 2005), along with the incorporation of a large vocal range. The music therapist may also consider using a strong, steady beat, so that speech production is externally cued, which uses a different pathway than the basal ganglia. In contrast, for hyperkinetic dysarthria (i.e., increased dopamine and increased movement), a music therapist may consider using music that promotes relaxation of the face and vocal areas by recruiting neurotransmitters other than dopamine, such as serotonin. The music therapist may also consider using an external cue once every four to eight beats, so that speech production is inhibited until the cue to speak is presented (see the sketch preceding Table 2).

Similarly, the neurophysiology associated with the differing types of aphasia can also be taken into account when designing music therapy interventions. For example, Broca's aphasia results in impaired speech production. However, other regions in the brain, including the left insular region and regions on the right side of the brain, have also been implicated in speech production. In this case, an understanding of neuroplasticity may aid in the development of music therapy interventions. The brain is always reorganizing, especially after injury. Thus, using the unique qualities of music that enhance dopamine production, synchronize neural firing, and engage the entire brain (Stegemöller, 2014), interventions can be developed to target activity in the remaining brain regions that contribute to speech production. These pathways can be strengthened, with the eventual goal of becoming the primary pathways for speech production. In Wernicke's aphasia, comprehension is impaired, and the impairment involves a network of brain regions.
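Returning to the dysarthria examples above, the "dopamine knob" reasoning and the two cueing strategies can be collected into a small schematic sketch. This is only an illustration of the logic described in the text, not a clinical protocol; the function names, parameter choices, and the 4-to-8-beat cue interval used as a default are illustrative assumptions.

```python
# Schematic sketch of the "dopamine knob" reasoning and the cueing strategies
# described in the text. Illustrative only; names and structure are hypothetical.

def dysarthria_profile(dopamine: str) -> dict:
    """Map the direction of dopamine change to the symptom picture and the
    music-based strategies suggested in the text."""
    if dopamine == "decreased":  # knob turned down -> hypokinetic
        return {
            "type": "hypokinetic dysarthria",
            "symptoms": ["low speech volume", "monotone speech",
                         "difficulty initiating speech"],
            "strategies": ["preferred music (may increase dopamine)",
                           "large vocal range",
                           "strong, steady beat as an external cue"],
        }
    if dopamine == "increased":  # knob turned up -> hyperkinetic
        return {
            "type": "hyperkinetic dysarthria",
            "symptoms": ["harsh, strained voice", "involuntary speech"],
            "strategies": ["relaxing music (targeting non-dopamine systems, e.g., serotonin)",
                           "sparse external cue, e.g., once every 4-8 beats"],
        }
    raise ValueError("expected 'increased' or 'decreased'")

def cue_times(tempo_bpm: float, beats: int, cue_every: int) -> list:
    """Times (in seconds) of cued beats: cue_every=1 gives a steady cue,
    cue_every=4 to 8 gives a sparse cue that delays speech initiation."""
    beat_period = 60.0 / tempo_bpm
    return [round(b * beat_period, 2) for b in range(beats) if b % cue_every == 0]

print(dysarthria_profile("decreased")["strategies"])
print(cue_times(tempo_bpm=90, beats=16, cue_every=4))  # cues at 0.0, 2.67, 5.33, 8.0 s
```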

Table 2
Types of Dysarthrias and Aphasias

Disorder Brain region Symptoms

Spastic dysarthria Corticospinal tract Impaired/lack of movement/speech
Hyperkinetic dysarthria Basal ganglia Harsh and strained voice, involuntary speech
Hypokinetic dysarthria Basal ganglia Low speech volume, monotone speech, inability to initiate speech
Ataxic dysarthria Cerebellum Uncoordinated speech
Flaccid dysarthria Lower motor neurons Impaired/lack of movement/speech
Broca's aphasia Left posterior frontal cortex Impaired speech production
Wernicke's aphasia Left posterior superior and middle temporal cortex Impaired language comprehension
Conduction aphasia Parietal lobe, arcuate fasciculus Impaired word repetition, poor oral reading
Global aphasia Large portion of the left hemisphere Inability to comprehend and form speech
Transcortical motor aphasia Left anterior superior frontal lobe Impaired speech production
Transcortical sensory aphasia Inferior left temporal lobe Impaired language comprehension
Engaging other brain regions to overcome the impairment in comprehension may be challenging. Thus, strengthening the network by embedding meaning in music, a less complex auditory signal (Stegemöller et al., 2008), may aid in strengthening the network of brain regions involved in language comprehension.

Conclusion

Many interventions have been, and will continue to be, developed for the treatment of dysarthria and aphasia. However, it is important to understand both the normal and the impaired neurophysiology associated with speech and language processing to better inform the design and implementation of music therapy interventions. Moreover, understanding this information will also allow for enhanced communication with other medical professionals. This paper provides only an initial overview of the neural processes associated with speech and language processing, as a starting point for deeper exploration and understanding of how the neuroscience of speech and language can enhance music therapy practice.

References

Ardila, A., Bernal, B., & Rosselli, M. (2016). How localized are language brain areas? A review of Brodmann areas involvement in oral language. Archives of Clinical Neuropsychology, 31(1), 112–122.
Bernstein, L. E., & Liebenthal, E. (2014). Neural pathways for visual speech perception. Frontiers in Neuroscience, 8, 386.
Binder, J. R. (2015). The Wernicke area: Modern evidence and a reinterpretation. Neurology, 85(24), 2170–2175.
Broca, P. (1865). Sur le siège de la faculté du langage articulé. Bulletins et mémoires de la Société d'anthropologie de Paris, 6, 377–393.
Chertkow, H., & Murtha, S. (1997). PET activation and language. Clinical Neuroscience, 4(2), 78–86.
Cohen, Y. E., & Knudsen, E. I. (1999). Maps versus clusters: Different representations of auditory space in the midbrain and forebrain. Trends in Neurosciences, 22, 97–142.
Cona, G., & Semenza, C. (2017). Supplementary motor area as key structure for domain-general sequence processing: A unified account. Neuroscience & Biobehavioral Reviews, 72, 28–42.
Dallos, P. (1992). The active cochlea. Journal of Neuroscience, 12, 4575–4585.
Damasio, H., Tranel, D., Grabowski, T. J., Hichwa, R., & Damasio, A. R. (2004). Neural systems behind word and concept retrieval. Cognition, 92, 179–229.
Deiber, M. P., Honda, M., Ibañez, V., Sadato, N., & Hallett, M. (1999). Mesial motor areas in self-initiated versus externally triggered movements examined with fMRI: Effect of movement type and rate. Journal of Neurophysiology, 81, 3065–3077.
Enderby, P. (2013). Disorders of communication: Dysarthria. Handbook of Clinical Neurology, 110, 273–281.
Gandour, J., Wong, D., Hsieh, L., Weinzapfel, B., Van Lancker, D., & Hutchins, G. D. (2000). A crosslinguistic PET study of tone perception. Journal of Cognitive Neuroscience, 12, 207–222.
Gentilucci, M., & Dalla Volta, R. (2008). Spoken language and arm gestures are controlled by the same motor control system. Quarterly Journal of Experimental Psychology, 61, 944–957.
Geschwind, N. (1970). The organization of language and the brain. Science, 170, 940–944.
Goodglass, H. (1993). Understanding aphasia. San Diego, CA: Academic Press.
Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8, 393–402.
Kandel, E. R., Schwartz, J. H., Jessell, T. M., Siegelbaum, S. A., & Hudspeth, A. J. (2013). Principles of neural science (5th ed.). New York, NY: McGraw-Hill.
Kiang, N. Y. S. (1984). Peripheral neural processing of auditory information. In J. M. Brookhart, V. B. Mountcastle, I. Darian-Smith, & S. R. Geiger (Eds.), Handbook of physiology: Section 1. The nervous system, Vol. III. Sensory processes, Part 2. Bethesda, MD: American Physiological Society.
Knudsen, E. I. (1999). Mechanisms of experience-dependent plasticity in the auditory localization pathway of the barn owl. Journal of Comparative Physiology A, 185, 305–321.
Knudsen, E. I., & Konishi, M. (1978). A neural map of auditory space in the owl. Science, 200, 795–797.
Kotz, S. A., Schwartze, M., & Schmidt-Kassow, M. (2009). Non-motor basal ganglia functions: A review and proposal for a model of sensory predictability in auditory language perception. Cortex, 45, 982–990.
Lewis, R. S., & Hudspeth, A. J. (1983). Voltage- and ion-dependent conductances in solitary vertebrate hair cells. Nature, 304, 538–541.
Liebenthal, E., Silbersweig, D. A., & Stern, E. (2016). The language, tone, and prosody of emotions: Neural substrates and dynamics of spoken-word emotion perception. Frontiers in Neuroscience, 10, 506.
Lomber, S. G., & Malhotra, S. (2008). Double dissociation of "what" and "where" processing in auditory cortex. Nature Neuroscience, 11, 609–616.
Menon, V., & Levitin, D. J. (2005). The rewards of music listening: Response and physiological connectivity of the mesolimbic system. NeuroImage, 28, 175–184.
Müri, R. M. (2016). Cortical control of facial expression. Journal of Comparative Neurology, 524, 1578–1585.
Price, C. J. (2010). The anatomy of language: A review of 100 fMRI studies published in 2009. Annals of the New York Academy of Sciences, 1191, 62–88.
Purves, D., Augustine, G. J., Fitzpatrick, D., Katz, L. C., LaMantia, A., & McNamara, J. O. (1997). Neuroscience. Sunderland, MA: Sinauer Associates.
Rauschecker, J. P., & Tian, B. (2000). Mechanisms and streams for processing of "what" and "where" in auditory cortex. Proceedings of the National Academy of Sciences USA, 97, 11800–11806.
Stegemöller, E. L. (2014). Exploring a neuroplasticity model of music therapy. Journal of Music Therapy, 51, 211–227.
Stegemöller, E. L., Skoe, E., Nicol, T., Warrier, C. M., & Kraus, N. (2008). Music training and vocal production of speech and song. Music Perception, 25, 419–428.
Tanji, J., & Shima, K. (1996). Supplementary motor cortex in organization of movement. European Neurology, 36(Suppl. 1), 13–19.
Tremblay, P., & Dick, A. S. (2016). Broca and Wernicke are dead, or moving past the classic model of language neurobiology. Brain and Language, 162, 60–71.
Wernicke, C. (1908). The symptom-complex of aphasia. In A. Church (Ed.), Diseases of the nervous system. New York, NY: Appleton.
Wildgruber, D., Ackermann, H., Kreifelts, B., & Ethofer, T. (2006). Cerebral processing of linguistic and emotional prosody: fMRI studies. Progress in Brain Research, 156, 248–268.
