Music and language are alike in being complex auditory stimuli and meaningful outputs of the human mind. Numerous neuroimaging studies have shown that many of the brain structures that process music are also involved in processing language, and vice versa. This tight entwinement of processing allows for functional overlap through plasticity-dependent learning in both perception and production. The overlap is especially salient when comparing speakers of tonal languages and trained musicians with speakers of non-tonal languages and musical novices, which highlights its practical applications for second language acquisition. This connectivity may also allow for therapeutic applications in those with language impairments such as dyslexia and autism.

Structural and Functional Overlap of Music and Language Processing

Musical and linguistic processing and their commonalities. Adapted from Brown et al. (2006)

Neuroimaging studies by several research groups have indicated a possible common pathway for music and language processing. In many of these studies, the main areas of overlap are the frontal and temporal regions, the basal ganglia, and areas involved in motor control.[1] Activation of motor regions such as the supplementary motor area (SMA), premotor cortex, and cerebellum during speech and music perception falls in line with the motor theory of speech perception.[2] Notably, the overlap appears to be restricted to the syntactic and tonal aspects of music and language rather than semantic meaning.[1][3]
In a recent study, Schon et al. used fMRI during same/different judgment tasks to reveal areas of common activation across conditions.[3] The conditions were a control (noise); speech, involving three spoken, atonal words; vocalization, involving notes sung on a single syllable; and song, involving paired words and melodies. All three experimental conditions showed enhanced activation of the middle and superior temporal gyri compared to the control. In addition, the song and speech conditions showed greater activation of the inferior frontal gyrus relative to the control.

Temporal Overlap of Music and Language

In addition to studies mapping the spatial overlap, EEG studies have examined the patterns of oscillatory neuronal firing involved in musical and linguistic processing. Delta and theta rhythms in anterior frontal and parietal regions are altered by syntactic irregularities in both music and language.[4] Based on the interference of music processing with language processing, the oscillatory pathways underlying syntactic processing appear to overlap in these anterior frontal and parietal areas.[4] When chord and language syntax errors occurred together, delta and theta rhythm amplitudes decreased noticeably; in contrast, isolated syntax errors, involving only chords or only language, increased delta and theta amplitudes.[4] Musical syntax processing seems to lie further upstream in this pathway, as some of its event-related potentials (ERPs) are unaffected by the presence of a linguistic syntax error.[4]
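The band-specific amplitude changes described here are typically quantified by isolating the delta (roughly 1-4 Hz) and theta (roughly 4-8 Hz) components of the EEG trace and measuring their power. The sketch below is purely illustrative and is not the analysis pipeline of Carrus et al.; it applies a naive DFT to a synthetic one-second trace containing a strong 2 Hz (delta) component and a weaker 6 Hz (theta) component, with all signal parameters chosen arbitrarily for the example.

```python
import cmath
import math

def band_power(window, f_lo, f_hi):
    """Sum DFT power over integer frequency bins f_lo..f_hi.

    Assumes the window is exactly one second long, so DFT bin k
    corresponds to k Hz.
    """
    n = len(window)
    total = 0.0
    for f in range(f_lo, f_hi + 1):
        coef = sum(window[i] * cmath.exp(-2j * math.pi * f * i / n)
                   for i in range(n))
        total += abs(coef) ** 2
    return total

# One second of a toy "EEG" trace sampled at 64 Hz: a strong 2 Hz
# (delta) sinusoid plus a weaker 6 Hz (theta) sinusoid.
sr = 64
trace = [math.sin(2 * math.pi * 2 * i / sr) +
         0.5 * math.sin(2 * math.pi * 6 * i / sr) for i in range(sr)]

delta = band_power(trace, 1, 4)   # delta band, 1-4 Hz
theta = band_power(trace, 5, 8)   # theta band, approx. 4-8 Hz
```

On this synthetic trace the delta-band power dominates, mirroring how band-resolved amplitudes in the studies above are compared across conditions rather than inspected as raw waveforms.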

Lateralization and localization of activation

It is currently believed that many of the processes involved in speech and music production are not compartmentalized into discrete domains but instead reflect gradients of activity. Speech elicits greater activation in the left hemisphere and music in the right; however, both hemispheres contribute, as illustrated by greater bilateral activation relative to controls (white noise).[1][3]
Domains of higher processing include Brodmann area 47 (BA 47), involved in processing the complex temporal structure, or rhythm, of speech and music.[5] Broca's area, known to be involved in the perception of language, is also activated during melody discrimination, which may reflect a deeper function in processing: creating a template and comparing new stimuli against it.[3] The planum polare (BA 38) may be one area that is highly specific to music processing, as it is generally not activated in tests of language processing.[3]

Musical Training and Effects on Language Perception

Given that the processing centers for music and language overlap, opportunities arise in tonal language acquisition. Experience with music or with a tonal language may have crossover benefits, facilitating learning of the other. As stated above, the processing overlap does not extend to semantic understanding of words; therefore, the benefits of musical or linguistic training center on the ability to discriminate tone, rhythm, and syntax, although these aspects may in turn contribute to word comprehension.

Musical Training and Tonal Discrimination

Musical training has been shown to have cross-modal effects on the perception of tones in foreign tonal languages, increasing the ability to discriminate different phonemes. Musicians are able to characterize each tone more confidently owing to their training; this confidence may play a role in their aptitude at discrimination tasks and may transfer to learning a tonal language.[6] Strengthening this view is the observation that musicians detect incorrect pitch changes in unfamiliar languages better than non-musicians.[7] They appear to pick up small changes in pitch flow, or prosody, when experimenters alter the pitch of words at the end of a sentence.[7] In addition to higher-order processing in the cortex, music and language cues overlap early in the auditory pathway. Using frequency-following responses (FFRs), Bidelman et al. showed that the brainstem is involved in coding pitch in both speech and music.[8] By measuring the firing of neurons in the rostral brainstem, they demonstrated clear differences in pitch discrimination among the three groups examined: English-speaking musicians and Mandarin speakers were much more sensitive to changes in pitch than English-speaking non-musicians, and the musicians were in turn more sensitive than the Mandarin speakers. This is thought to reflect experience-based plasticity: as they hone their fluency, tonal language speakers pare the range of pitches to which they are sensitive down to those that are most linguistically relevant,[8] whereas musicians are exposed to, and regularly practice with, a highly organized scale of pitches, resulting in a broader range of sensitivity.[8]
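The FFR work described above rests on the fact that pitch can be recovered from the periodicity of a signal, whether neural or acoustic. As a purely illustrative sketch, and not the method of Bidelman et al., the following estimates a fundamental frequency by finding the autocorrelation lag at which a waveform is most similar to itself; the 220 Hz test tone and all parameter values are arbitrary choices for the example.

```python
import math

def estimate_pitch(signal, sample_rate, fmin=80.0, fmax=500.0):
    """Estimate fundamental frequency (Hz) via autocorrelation.

    Searches the lags corresponding to the fmin..fmax range and
    returns the frequency whose lag maximizes the autocorrelation.
    """
    n = len(signal)
    lag_min = int(sample_rate / fmax)
    lag_max = min(int(sample_rate / fmin), n - 1)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        corr = sum(signal[i] * signal[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Synthesize a 220 Hz tone (roughly the pitch of A3) as a stand-in
# for a speech or music stimulus.
sr = 8000
tone = [math.sin(2 * math.pi * 220 * t / sr) for t in range(800)]

pitch = estimate_pitch(tone, sr)  # close to 220 Hz
```

Periodicity-based estimates like this work equally well whether the input is a sung note or a spoken tonal syllable, which is one intuition for why a shared early pitch-coding stage in the brainstem is plausible.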

Absolute Pitch in Tonal Languages

Having noted the effect of musical training on language acquisition, it is also relevant to address the effect of tonal languages on musical abilities. Absolute pitch is the ability to recognize and label a tone without a reference.[9] In North American music conservatories this ability has been found to be more prevalent among tonal-language-speaking musicians, with fluency in the language being the limiting variable.[9] This is supported by studies of musicians born and raised in tonal-language-speaking societies, in which the occurrence of absolute pitch is much higher.[10] Studies of this interaction posit that absolute pitch is acquired over the critical period for language and that its acquisition resembles tonal language acquisition.[9]

Prosody and Musical Training

In addition to the tonal aspects of language, perhaps more relevant to North Americans are the prosodic elements: the rhythm, stress, and flow of a sentence.[11] Prosody has been proposed as the linguistic analogue of the rhythm and meter of musical compositions. Studies have shown that those with musical training are better able to process the length of syllables within a word, possibly enhancing the speed at which meaning is extracted.[12] This reliance on the temporal aspect of syllables leads to slower response times in word comprehension tasks when a word contains subtle, incongruous alterations in syllable length.[11][12] The effect seems to be automatic in musicians, who detect incongruities in the meter of words even when their attention is directed at a different task.[11] This automaticity may reflect changes in attentional sensitivity and in integration into auditory pathways.[11][12] In terms of second language acquisition, musicians seem able to adapt quickly to the temporal structure of sentences in unfamiliar languages, including vowel length, stop consonant duration, and meter;[13] however, there seems to be a preference for phonemes similar to those of the native language, and more study of completely unfamiliar syllable formants is needed to uncover the benefits of musical training there.[13] Although no direct neural effects on semantic processing were found, behavioural observations revealed a lower error rate in the detection of semantic incongruities in those with musical training,[11] which is thought to be due to the effects on prosodic processing and attention stated above.

Therapeutic applications

Research on the functional effects of music on the abnormal brain has led to the creation of music-based therapies. Intuitively, such therapies might use music to affect emotion and thereby treat emotional disorders; however, music-based therapies have also been shown to be relevant to the treatment of specific language impairments.

Applications in Dyslexia

Drawing on the literature on musical and linguistic overlap within the brain, studies have shown that some speech and reading disorders may be treated by musical intervention. In dyslexia, impaired speed of auditory processing has been suggested to contribute to the deficits in reading comprehension.[14] There is some evidence of a biological basis for optimal rhythms of perceptual processing of auditory input, which may be disrupted in dyslexia.[15] This impairment affects the prosodic elements of stress and rhythm in language, inhibiting a child's ability to comprehend the meaning of a sentence;[15][16] by contrast, decoding of individual words was found to be relatively unaffected in dyslexics.[16] Deficits in perceiving the temporal aspects of music, rhythm and meter, are reliable predictors of poor detection of prosodic elements in language and of poor language skills in general,[15] in line with studies showing an early shared auditory pathway for music and speech.[8] Musical interventions have been shown to increase reading comprehension in dyslexic children, possibly through crossover effects on the temporal processing of auditory stimuli common to music and language.[15] Another possible cause of this improvement is enhanced attentional skill, which is trained simultaneously in musicians.[16] More studies will be needed to resolve the ambiguity over the cause of this crossover effect.
Possible mechanism of musical benefits in literacy. Adapted from Overy K. (2003)

Applications in Autism

Illustration of the Auditory-Motor Mapping Training. Adapted from Wan et al. (2011)

Speech impairment in autism is thought to involve multiple sites of developmental dysfunction, including interference with the pathways between language and motor control.[17] These motor control pathways, as stated earlier, play a role in the motor theory of speech perception,[2] facilitated by mirror neurons that may allow young children to mimic spoken language.[2][18] Based on the literature showing the connectivity of the networks used to process musical and linguistic phenomena, with evidence of the integration of motor networks, Wan et al. constructed an intervention for autistic children with speech impairments.[18] The novel treatment engages this trimodal network, combining musical tones, short words or phrases, and an action (in this case tapping a drum). By activating the three networks together, connectivity between them is strengthened and dysfunctional pathways may be bypassed. Preliminary findings showed significant and long-lasting improvements in speech production; because the participants were non-verbal autistic children, they did not produce complete words but were instead trained to produce individual syllables.[18] Improvements carried over to untrained speech sounds, illustrating genuine learning of language rather than rote repetition, and lasted for eight weeks after the sessions ended.[18] This training could be extended in future studies to combine the syllables into simple words.

See Also

Music and the Developing Brain
Music Therapy
Musical Disorders
Music and Memory
Music and Emotion


  1. ^ Brown S, Martinez MJ, Parsons LM. (2006) Music and language side by side in the brain: a PET study of the generation of melodies and sentences. European Journal of Neuroscience. 23: 2791-2803
  2. ^ Galantucci B, Fowler CA, Turvey MT. (2006) The motor theory of speech perception reviewed. Psychonomic Bulletin & Review. 13(3): 361-377
  3. ^ Schon D, Gordon R, Campagne A, Magne C, Astesano C, Anton JL, Besson M. (2010) Similar cerebral networks in language, music and song perception. Neuroimage. 51: 450-461
  4. ^ Carrus E, Koelsch S, Bhattacharya J. (2011) Shadows of music–language interaction on low frequency brain oscillatory patterns. Brain and Language. 119: 50-57
  5. ^ Vuust P, Wallentin M, Mouridsen K, Ostergaard L, Roepstorff A. (2011) Tapping polyrhythms in music activates language areas. Neuroscience Letters. 494: 211-216
  6. ^ Marie C, Delogu F, Lampis G, Belardinelli MO, Besson M. (2011) Influence of Musical Expertise on Segmental and Tonal Processing in Mandarin Chinese. Journal of Cognitive Neuroscience. 23(10): 2701-2715
  7. ^ Marques C, Moreno S, Castro SL, Besson M. (2007) Musicians Detect Pitch Violation in a Foreign Language Better Than Nonmusicians: Behavioral and Electrophysiological Evidence. Journal of Cognitive Neuroscience. 19(9): 1453-1463
  8. ^ Bidelman GM, Gandour JT, Krishnan A. (2011) Musicians and tone-language speakers share enhanced brainstem encoding but not perceptual benefits for musical pitch. Brain and Cognition. 77: 1-10
  9. ^ Deutsch D, Dooley K, Henthorn T, Head B. (2009) Absolute pitch among students in an American music conservatory: Association with tone language fluency. The Journal of the Acoustical Society of America. 125(4): 2398-2403
  10. ^ Lee CY, Lee YF, Shr CL. (2011) Perception of musical and lexical tones by Taiwanese-speaking musicians. The Journal of the Acoustical Society of America. 130(1): 526-535
  11. ^ Marie C, Magne C, Besson M. (2011) Musicians and the Metric Structure of Words. Journal of Cognitive Neuroscience. 23(2): 294-305
  12. ^ Chobert J, Marie C, Francois C, Schon D, Besson M. (2011) Enhanced Passive and Active Processing of Syllables in Musician Children. Journal of Cognitive Neuroscience. 23(12): 3874-3887
  13. ^ Sadakata M, Sekiyama K. (2011) Enhanced perception of various linguistic features by musicians: A cross-linguistic study. Acta Psychologica. 138: 1-10
  14. ^ Overy K. (2003) Dyslexia and music: from timing deficits to musical intervention. Annals of the New York Academy of Sciences. 999(1): 497-505
  15. ^ Huss M, Verney JP, Fosker T, Mead N, Goswami U. (2011) Music, rhythm, rise time perception and developmental dyslexia: Perception of musical meter predicts reading and phonology. Cortex. 47: 674-689
  16. ^ Corrigall KA, Trainor LJ. (2011) Associations Between Length of Music Training and Reading Skills in Children. Music Perception: An Interdisciplinary Journal. 29(2): 147-155
  17. ^ Stefanatos GA, Baron IS. (2011) The Ontogenesis of Language Impairment in Autism: A Neuropsychological Perspective. Neuropsychology Review. 21(3): 252-270
  18. ^ Wan CY, Bazen L, Baars R, Libenson A, Zipse L, Zuk J, Norton A, Schlaug G. (2011) Auditory-Motor Mapping Training as an Intervention to Facilitate Speech Output in Non-Verbal Children with Autism: A Proof of Concept Study. PLoS ONE. 6(9): e25505. doi:10.1371/journal.pone.0025505