The National Science Foundation’s INSPIRE grant team, headed by Dr. Laura-Ann Petitto (PI), convened its fall 2018 international meeting at Gallaudet University in the Petitto Brain and Language Laboratory for Neuroimaging (BL2) from Sunday, September 16, through Thursday, September 20, 2018. The NSF research team discussed its revolutionary language learning tool, the Robot AVatar thermal-Enhanced prototype, or “RAVE.”
The RAVE language learning tool is designed to provide language input to young deaf and hearing babies (ages 6 to 12 months) who have had minimal language exposure, during critical periods of brain sensitivity when babies need to encounter the rhythmic temporal patterns of their native language. Exposure to these language patterns in early life contributes to a baby’s acquisition of its language’s phonetic-syllabic units (be it signed or spoken), vocabulary, and syntactic and morphological regularities, all vital to early reading success! RAVE pairs an embodied robot, which directs the baby’s attention, with an avatar that produces American Sign Language along with other social gestures and communicative behaviors.
Two features make RAVE unique in science and translation: (1) RAVE provides rhythmic temporal language patterns in ASL that the human infant brain needs to encounter at just the right period in development, when the baby needs it most. (2) RAVE uses advanced Artificial Intelligence (AI) technology and computational algorithms that give RAVE’s artificial avatar agent the remarkable capacity to produce language conversations and social interactions with a human baby that are socially contingent on (that is, meaningfully related to) the baby’s emotional and attentional level of interest, as measured with the innovative technology called Thermal Infrared Imaging, even before the baby is actually capable of producing language!
Aims: The aims of the fall 2018 meeting are to explore new tests and analyses of the collected experimental data in order to better understand RAVE’s ability to facilitate language learning in young deaf and hearing babies; to design new experiments that probe RAVE’s potential to positively impact infant language learning; to explore new variations of RAVE’s AI and thermal IR imaging technology that could improve the system; to discuss exciting directions for dissemination (including next-step publications and conferences); to discuss potential Big Data testing of RAVE involving the Cloud, as well as possible site testing in homes and schools; and, of course, to explore new grants to write!
Teams: Interdisciplinary teams contribute diverse expertise to RAVE science.
- Petitto (PI and Project Head) and the BL2 team conducted the functional near infrared spectroscopy (fNIRS) brain imaging studies that identified infants’ sensitivity to specific rhythmic temporal patterns in language and the ages in human development at which that sensitivity emerges. BL2 provided the specific rhythmic temporal frequencies on which the Avatar’s signed language stimuli were built, as well as information relevant to RAVE’s Thermal IR imaging measurements and predictions (see the Merla description below). BL2 was also home to all experiments with babies interacting with RAVE, and the BL2 team conducted the extensive analyses of the differential impact that the Avatar and the robot have on facilitating language learning in young babies.
- At Gallaudet University, Melissa Malzkuhn, Jason Lamberton, and their team collaborate with Petitto’s BL2 through Malzkuhn’s creation of ASL nursery rhymes built on the rhythmic temporal patterns important in young infant brain development. Like a lock and key, the goal was to build Avatar signed language stimuli containing just those rhythmic temporal frequencies to which infant brains are most sensitive, specifically at ages 6 to 12 months. Malzkuhn is the Creative Director of the Motion Light Lab (ML2), with its advanced Motion Capture Studio. In ML2, Jason Lamberton pioneered solutions in avatar creation that enabled the avatar to sign ASL with greater fidelity: for example, more natural and accurate grammatical facial expressions in ASL, the representation and non-occlusion of signing hands moving in space, and greater fluidity of avatar movements. Both Malzkuhn’s ML2 and Petitto’s BL2 are Research Hubs in the NSF-Gallaudet Science of Learning Center, Visual Language and Visual Learning (VL2).
- Dr. David Traum (Project Head), graduate student Setareh Nasihati Gilani, and Dr. Ari Shapiro are the Virtual Human/Avatar Scientists from the University of Southern California. Traum and Nasihati Gilani pioneered the Avatar’s social dialogue system, and Shapiro contributed to building important components of the Avatar.
- Dr. Arcangelo Merla (Project Head), graduate student Chiara Filippini, and their team are from the Università Gabriele D’Annunzio in Chieti-Pescara, Italy. Dr. Merla is the Applied Psychophysiologist and Bio-Medical Engineer who pioneered both the Thermal Infrared Imaging and the Face Imaging systems. Both are vital to RAVE’s ability to discern a child’s emotional interest, which, in turn, triggers the Avatar to start and stop a conversation, thereby achieving, for the first time, rudimentary socially contingent conversations between an artificial agent and a human baby.
- Dr. Brian Scassellati (Project Head; not present at this research stage) is the Robotics Scientist from Yale University. He and his team built the robot component of RAVE.
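The thermal-imaging-driven triggering described above can be pictured as a simple feedback loop: an engagement estimate derived from the baby’s thermal IR signal rises or falls, and the avatar starts or stops conversing accordingly. The following is a minimal illustrative sketch of such a loop; the function name, thresholds, and engagement scale are hypothetical assumptions for exposition, not the project’s actual code.

```python
# Hypothetical sketch of a socially contingent trigger loop.
# All names and thresholds are illustrative, not RAVE's real implementation.

def update_avatar_state(engagement: float, avatar_active: bool,
                        start_threshold: float = 0.6,
                        stop_threshold: float = 0.3) -> bool:
    """Decide whether the avatar should be conversing.

    `engagement` stands in for an interest estimate derived from
    thermal IR imaging of the baby (0 = disengaged, 1 = highly engaged).
    Two separate thresholds provide hysteresis, so the avatar does not
    flicker on and off around a single cutoff value.
    """
    if not avatar_active and engagement >= start_threshold:
        return True   # baby shows interest: avatar begins a nursery rhyme
    if avatar_active and engagement <= stop_threshold:
        return False  # interest has waned: avatar pauses
    return avatar_active  # otherwise keep the current state


if __name__ == "__main__":
    # Simulated stream of engagement estimates over time.
    readings = [0.2, 0.5, 0.7, 0.65, 0.4, 0.25, 0.35, 0.8]
    active = False
    states = []
    for r in readings:
        active = update_avatar_state(r, active)
        states.append(active)
    print(states)  # [False, False, True, True, True, False, False, True]
```

The two-threshold (hysteresis) design choice reflects the behavioral goal stated above: the avatar should start only when interest is clearly present and stop only when it has clearly waned, rather than toggling on every small fluctuation in the thermal signal.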
Professor Petitto is grateful to her collaborators for traveling near and far for the NSF INSPIRE conference. The team’s work focused on what is next for RAVE, including how best to disseminate the results of this transformative learning tool system and its data to the scientific and general community. Contact BL2 to join its mailing list and learn more about our exciting work!