Brain Studies of Language Processing In Deaf & Hearing People
Petitto and colleagues at the University of Toronto, Dartmouth College Cognitive Neuroscience Center, and the Montreal Neurological Institute (Neuropsychology/Cognitive Neuroscience Unit and McConnell Brain Imaging Centre) conducted brain-imaging studies (PET, MRI, fMRI, fNIRS) of profoundly deaf people processing highly specific aspects of natural signed languages, compared with hearing people processing the identical aspects of spoken language. For 100 years, neuroanatomy textbooks have identified the brain’s left hemisphere as the primary site of spoken language processing.
Because lesions to tissue in the left hemisphere result in disorders of speech and language, the left hemisphere has been presumed to be fundamentally dedicated to processing sound and speech. In their studies, Petitto and colleagues asked: what is the neural basis of the cerebral specialization for language? Are the brain sites involved in the neural processing of language uniquely specialized for perceiving and producing sound and speech? Or do they also involve tissue dedicated to processing specific aspects of the patterning of natural language (such as the rhythmically alternating, maximally contrasting patterns of units found at the heart of natural-language phonology)? Petitto’s team was particularly interested in understanding the neural basis by which highly specific language structures are processed at highly specific brain sites, and examined this question using both spoken and signed languages. They sought to test whether the brain’s language tissue is dedicated to sound per se, or to specific patterns unique to the structure of natural language.
Scanning the Brains of Profoundly Deaf Signers and Hearing Speakers
Petitto scanned the brains of profoundly deaf people who have used signed language all their lives, and of hearing people who have used spoken language all their lives (and who have no knowledge of signed language), while they performed highly specific language tasks. She also studied deaf individuals across two entirely different cultural and linguistic communities: American Sign Language (ASL)–the signed language used in the United States and parts of Canada–and Langue des Signes Québécoise (LSQ)–the signed language used in Québec and other parts of French Canada–to obtain cross-linguistic replication of the research findings. Petitto selected parts of language to study that are well understood for spoken language–both in terms of their function and in terms of where in the brain they are processed–in order to: (1) pinpoint where specific parts of natural signed language are processed, and (2) ask why a particular brain site may control a particular language function:
Are these brain areas where sounds are processed, or are these brain areas doing something else (for example, are these the brain areas where particular types of linguistic patterns are processed regardless of the modality)?
A Hypothesis Challenged
If signed languages are processed in the same brain sites as spoken languages, and for the same parts of language, this would challenge the hypothesis that this brain tissue is exclusively dedicated to sound processing, and it would suggest that the site may be doing “something else” (see Petitto, Zatorre, et al., 2000, PNAS). Petitto’s research findings revealed surprising similarities in the cerebral organization of signed and spoken language while people processed highly specific parts of natural language. In particular, phonological units in signed language were processed in the same classic phonological (sound) processing tissue observed in hearing, speaking adults: the Planum Temporale in the Superior Temporal Gyrus. There were also intriguing differences. Research of this kind encourages us to reconsider our ideas about the very nature of language and the neural principles that guide its organization in the human brain (Petitto_PNAS_Supplement.pdf).