How do babies acquire language? What do babies know when they start to learn language? What are babies born with to help them discover the core structure of their native language?
Petitto has sought answers to these questions through intensive studies of hearing babies acquiring spoken languages (English or French) and deaf babies acquiring signed languages (American Sign Language, ASL, or Langue des Signes Québécoise, LSQ), from birth through 48 months of age. A prevailing view about the biological foundations of language has been that very early language acquisition is tied to speech. Universal regularities in the timing and structure of infants’ vocal babbling and first words were taken as evidence that the brain must be attuned to perceiving and producing spoken language, per se, in early life. A frequent answer to the question “how does early human language acquisition begin?” was that it results from the development of the neuroanatomical and neurophysiological mechanisms involved in the perception and production of speech.
Put another way, the view of human biology at work is that evolution has rendered the human brain neurologically “hardwired” for speech. The most striking finding to emerge from Petitto’s studies is that speech, per se, is not critical to the human language acquisition process. Irrespective of whether an infant is exposed to spoken or signed languages, both are acquired on an identical maturational time course. Further, hearing infants acquiring spoken languages and deaf infants acquiring signed languages exhibit the same linguistic, semantic, and conceptual complexity, stage for stage.
If sound and speech are assumed to be critical to normal language acquisition, how then can we account for Petitto’s persistent sign language acquisition findings in young deaf and hearing children exposed to natural signed languages from birth? Petitto reasoned that in order for signed and spoken languages to be acquired in the same manner, human infants at birth may not be sensitive to sound or speech, per se. Instead, infants may be sensitive to what is encoded within the modality. She proposed that humans are born with a sensitivity to particular rhythmic-temporal patterning of approximately 1.5 Hertz, in maximally-alternating contrast, about the size of a syllable – rhythmic, temporally-oscillating bursts that are unique to aspects of natural language structure (e.g., see Petitto’s publications, especially Petitto & Marentette, 1991; Petitto et al., 2002; Petitto, 2005). These specific physical dimensions – rhythmic-temporal units of about 1.5 Hertz in “sing-song” prosodic patterning – are roughly equivalent to the bite-sized, maximally-contrasting syllable segments of language, and both levels of language organization are found in spoken and signed languages. If the input language contains these specific patterns, infants will attempt to produce them – regardless of whether they encounter these patterns on the hands or on the tongue.
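The idea of a perceptual sensitivity tuned to rhythmic bursts of roughly 1.5 Hz can be illustrated with a toy signal-processing sketch. Nothing below comes from Petitto’s actual analyses: the function name, the candidate frequencies, and the synthetic “movement envelope” are all hypothetical, chosen only to show what “finding a dominant rhythm near 1.5 Hz” could mean computationally.

```python
import math

def dominant_frequency(samples, sample_rate, candidates):
    """Return the candidate frequency (Hz) with the largest DFT magnitude."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]  # remove the DC offset first
    best_f, best_mag = None, -1.0
    for f in candidates:
        # Correlate the signal with a sinusoid at frequency f (one DFT bin).
        re = sum(x * math.cos(2 * math.pi * f * i / sample_rate)
                 for i, x in enumerate(centered))
        im = sum(x * math.sin(2 * math.pi * f * i / sample_rate)
                 for i, x in enumerate(centered))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_f, best_mag = f, mag
    return best_f

# Synthetic "movement envelope": a pure 1.5 Hz oscillation sampled
# at 60 Hz for 4 seconds (purely invented data for illustration).
rate = 60
signal = [math.sin(2 * math.pi * 1.5 * t / rate) for t in range(4 * rate)]
print(dominant_frequency(signal, rate, [0.5, 1.0, 1.5, 2.5, 3.0]))  # 1.5
```

A real analysis of infant movement or speech envelopes would of course use a full spectral estimate rather than a handful of candidate bins, but the principle is the same: the rhythm of interest shows up as a peak near 1.5 Hz.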
One novel implication here is that language modality, be it spoken or signed, is highly plastic and may be neurologically set after birth. Note that this brain sensitivity, which Petitto has discovered to be processed in the brain’s Superior Temporal Gyrus (STG) in very young infants (and beyond), provides infants with the units over which they can tacitly perform important distributional (“statistical”) analyses, which, in turn, enable them to derive core knowledge of the patterning at the heart of natural language phonology, syntax, and phonotactic organization. Another benefit of this brain sensitivity is that, in addition to all else, it permits the child to solve the “problem of reference” (that is, to “discover” the discrete unit in the continuous linguistic stream, so that he or she may learn its meaning). Since Petitto began her study of language acquisition and the mechanisms that support it in the brains of young children, it has become accepted that babies are born with a propensity to acquire language. Whether the language comes as speech or sign does not appear to matter to the brain. As long as the language input has the crucial properties described above, human babies will attempt to acquire it.
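The distributional (“statistical”) analyses mentioned above are often illustrated in the literature with transitional probabilities between adjacent syllables: high within a word, lower across word boundaries, which lets a learner find word edges in a continuous stream. The sketch below is a generic illustration of that idea, not Petitto’s method; the syllable stream and its two “words” are invented.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for each adjacent syllable pair in the stream."""
    pairs = Counter(zip(syllables, syllables[1:]))
    firsts = Counter(syllables[:-1])  # how often each syllable leads a pair
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# A continuous stream built from two hypothetical "words":
# ba-bi-bu and da-di-du (invented for illustration).
stream = "ba bi bu da di du ba bi bu ba bi bu da di du".split()
tp = transitional_probabilities(stream)
print(tp[("ba", "bi")])  # 1.0 -> within-word transition, fully predictable
print(tp[("bu", "da")])  # ~0.67 -> boundary transition, less predictable
```

Dips in transitional probability mark candidate word boundaries, which is one concrete sense in which having discrete syllable-sized units makes the “problem of reference” tractable.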
Timing Milestones in Early Human Language Acquisition: Summary of Findings
Deaf children exposed to signed languages from birth acquire these languages on the same maturational time course as hearing children acquire spoken languages. Deaf children acquiring signed languages do so without any modification, loss, or delay to the timing, content, and maturational course associated with reaching all linguistic milestones observed in spoken language. Beginning at birth, and continuing through age 3 and beyond, speaking and signing children exhibit identical stages of language acquisition.
These include (a) the “syllabic babbling stage” (7-10 months), as well as other developments in babbling, including “variegated babbling” (10-12 months) and “jargon babbling” (12 months and beyond); (b) the “first word stage” (11-14 months); (c) the “first two-word stage” (16-22 months); and the grammatical and semantic developments beyond.

Communicative gestures versus language. Surprising similarities are also observed in the timing of onset and use of gestures in deaf and hearing children. Signing and speaking children produce strikingly similar pre-linguistic (9-12 months) and post-linguistic communicative gestures (12-48 months). Deaf babies do not produce more gestures, even though linguistic “signs” (the counterpart of the “word”) and communicative gestures reside in the same modality, and even though some signs and gestures can be formationally and referentially similar. Instead, deaf children consistently differentiate linguistic signs from communicative gestures throughout development, using each in the same ways observed in hearing children.
Throughout development, signing and speaking children also exhibit remarkably similar complexity in their language utterances, as well as similar types of gestures.

The discovery of manual babbling. The regular onset of vocal babbling – the CV (consonant-vowel) alternation “bababa” and other repetitive, syllabic sounds that infants produce – has led researchers to conclude that babbling represents the “beginning” of human language acquisition (specifically, language production). Babbling – and thus early language acquisition in our species – was said to be determined by the development of the anatomy of the vocal tract and the neuroanatomical and neurophysiological mechanisms subserving the motor control of speech production.
The Discovery of Babbling on the Human Hand
In the course of conducting research on deaf infants’ transition from pre-linguistic gesturing to first signs (9-12 months), Petitto discovered a class of hand activity containing linguistically-relevant units that was different from all other hand activity at this age: deaf infants appeared to be babbling with their hands. Additional studies were undertaken to understand the basis of this extraordinary behavior. The findings, reported in Science, revealed unambiguously a discrete class of hand activity in deaf infants that was structurally identical to the vocal babbling observed in hearing infants. Like vocal babbling, manual babbling was found to (i) have a restricted set of phonetic units (unique to signed languages), (ii) possess syllabic organization, and (iii) be used without meaning or reference. This hand activity was also wholly distinct from all infants’ rhythmic hand activity, be they deaf or hearing, and even from all infants’ communicative gestures.

The discovery of babbling in another modality challenged our conception of language as being tied to speech: it pulled apart speech and language, suggesting that they are not one and the same thing. It further confirmed Petitto’s hypothesis that babbling represents a distinct and critical stage in the ontogeny of human language. However, it disconfirmed existing hypotheses about why babbling occurs, i.e., that babbling is neurologically determined by the maturation of the speech-production mechanisms, per se. Specifically, it was thought that the “bababa” that infants produce is determined by the rhythmic opening and closing of the mandible (jaw). But manual babbling is also produced with rhythmic, syllabic (open-close, hold-movement hand) alternations. Subsequent studies were conducted to examine the physical basis of this extraordinary phenomenon.
(See Petitto & Marentette, 1991, Babbling in the manual mode: Evidence for the ontogeny of language. Science).
The Physics of Manual Babbling: The OPTOTRAK Study
Where do the common structures in vocal and manual babbling come from? Is manual babbling really different from all babies’ other rhythmic hand movements and early hand gestures? Petitto hypothesized that the common structure observed across manual and vocal babbling is due to the existence of “supra-modal constraints,” with the rhythmic-temporal oscillations of babbling being key. Both types of babbling are produced in rhythmic, temporally-oscillating bundles (though their absolute rates in Hz are not identical and, crucially, need not be, due to predicted structure- and modality-specific differences), which may, in turn, be yoked to constraints on the infant’s perceptual systems.
Petitto’s study of manual babbling was conducted with McGill colleague David Ostry and students Siobhan Holowka, Lauren Sergio, and Bronna Levy, using the OPTOTRAK Computer Visual-Graphic Analysis System. The precise physical properties of all infants’ manual activity were measured by placing tiny Light-Emitting Diodes (LEDs) on infants’ hands and feet (control site).
The LEDs transmitted light impulses to cameras that, in turn, sent signals into the OPTOTRAK system. This information was then fed into computer software that Petitto and her colleagues designed to provide information analogous to the spectrographic representation of speech (the “speech spectrogram”), but adapted here for the spectrographic representation of sign. For the first time, researchers were able to obtain recordings of the timing, rate, path movement, velocity, and “f0” for all infant hand activity, and to obtain sophisticated 3-D graphic displays of each.
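As a rough illustration of the kind of measure such a motion-capture system can extract, here is a minimal sketch that estimates an oscillation rate from a one-dimensional track of LED positions by counting zero crossings of the mean-centered signal. The function, the sampling rate, and the synthetic track are all assumptions for illustration, not the actual OPTOTRAK software or its parameters.

```python
import math

def oscillation_rate(positions, sample_rate):
    """Estimate an oscillation rate (Hz) from a 1-D position track by
    counting sign changes of the mean-centered signal (two per cycle)."""
    mean = sum(positions) / len(positions)
    centered = [p - mean for p in positions]
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    duration = len(positions) / sample_rate
    return crossings / 2 / duration

# Synthetic hand track: a 2.5 Hz oscillation sampled at 60 Hz for 4 seconds
# (the phase offset keeps zeros from landing exactly on sample points).
rate = 60
track = [math.sin(2 * math.pi * 2.5 * t / rate + 0.3) for t in range(4 * rate)]
print(oscillation_rate(track, rate))  # 2.5
```

Real movement data would need smoothing before a crossing count is meaningful, and the published analyses used full spectral and velocity profiles; the point here is only that rate, like velocity and path, is directly recoverable from timestamped LED positions.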
In the video clip below, you will first see a 10-month-old SPEECH-exposed hearing boy producing one example of the high-frequency hand movements mentioned in the above article. Second, you will see a 10-month-old SIGN-exposed hearing girl producing one example of the low-frequency hand movements. While the naked eye, using videotape analysis alone, may impart the erroneous perception that the two babies’ hand movements are the same, the OPTOTRAK technology reveals the stunning ways in which these two babies’ hand movements are systematically different, and it further reveals the linguistic principles that bind one class of hand movements (i.e., the baby girl’s) but not the other (i.e., the baby boy’s). The work was published in Nature.
The Discovery of Babbling and the Brain’s Laterality in Young Babies
The question of whether baby babbling is fundamentally linguistic (reflecting the rudimentary elements of language) or merely motor exercise (practicing the mechanics of mouth movements) has been an enduring one in science. In a 2002 paper published in Science (Holowka & Petitto, 2002, Left hemisphere cerebral specialization for babies while babbling), Petitto and graduate student Holowka reported the discovery of a strong link between VOCAL babbling in HEARING babies and the language processing centers of the brain. Studying typically-developing hearing babies of hearing parents (~ age 5 months, a French group and an English group), they found that babies babble with a greater mouth opening on the RIGHT side of their mouths, indicating LEFT brain hemisphere control. They also found that all babies had EQUAL mouth opening for vocalizations that were NON-BABBLES and, interestingly, LEFT mouth asymmetry for SMILES (indicating right-hemisphere control). Not only do these findings link babbling to the language centers in the left side of the brain, but, for the first time, the results also suggested that a basic expression of emotion, such as smiling, is linked to the right hemisphere’s emotional centers in the brain, just as in adults.
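Mouth-opening asymmetries of this kind are commonly summarized with a laterality index of the form (R - L) / (R + L). The sketch below illustrates that generic index, not the scoring procedure used in the 2002 study; the opening values are invented, and the sign convention (positive = right-side bias = left-hemisphere control) is an assumption for illustration.

```python
def laterality_index(right_opening, left_opening):
    """(R - L) / (R + L): positive values indicate a larger right-side
    opening, conventionally read as left-hemisphere control (and vice versa)."""
    return (right_opening - left_opening) / (right_opening + left_opening)

# Hypothetical mouth-opening measurements (arbitrary units):
print(laterality_index(6.0, 4.0))   # 0.2  -> right-mouth bias (babbles)
print(laterality_index(5.0, 5.0))   # 0.0  -> symmetric (non-babbles)
print(laterality_index(4.0, 6.0))   # -0.2 -> left-mouth bias (smiles)
```

Normalizing by the total opening (R + L) makes the index comparable across babies with differently sized mouth movements, which is why this ratio form is preferred over a raw difference.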