A key factor underlying language acquisition is the child's ability to connect linguistic input with the surrounding visual world. Deaf children, who perceive both linguistic and non-linguistic input through the visual modality, face a unique challenge: they must learn to alternate their gaze and attention in order to integrate language and visual information. Little is known about how deaf children acquire the gaze control needed to associate object labels with their referents specifically, or to perceive language more generally. The overall objective of this project is to investigate the developmental time course through which deaf children learn to integrate information in the visual modality, and to probe the relationships among parental input, children's development of gaze control, and children's language ability. Study participants include deaf children between the ages of 18 and 60 months and their mothers. Children will be divided based on the type of linguistic input they are receiving: children receiving ASL from birth from deaf parents (native signers), and children receiving ASL input from hearing parents and/or from early intervention professionals (non-native signers).

This study employs an integrated approach involving naturalistic observation, semi-structured tasks, and controlled laboratory eye-tracking experiments in order to investigate visual attention at multiple scales. First, naturalistic interactions between children and their mothers will be recorded to examine parental cues that prompt children's shifts of attention between objects and people, as well as children's developing control of their own gaze. Second, novel word learning in deaf children will be investigated through a semi-structured task: parent-child dyads will be given sets of novel objects, and parents will be provided with novel labels for the objects to use during the session. Child and parent gaze and object handling will be recorded using head-mounted cameras in order to capture both the child's and the parent's view. Both the sensory dynamics of these naming events and children's success in learning the novel object names will be assessed. Third, children's ability to integrate linguistic and visual information during language processing will be investigated through a series of studies using automated eye-tracking technology. Using a novel adaptation of the visual world paradigm, children's gaze patterns will be recorded during real-time processing of ASL signs and sentences.

Together, these studies will advance theoretical understanding of the social dynamics of parent-child interaction, joint attention, and word learning across modalities. Findings will also have direct implications for the design of intervention programs and classroom instruction for deaf children. By revealing how deaf children process and integrate information through the visual modality, this work will inform the design of learning environments, from early intervention through classroom settings, that accommodate deaf children's visual needs.

Public Health Relevance

Deaf children frequently receive impoverished language input in their early years, putting them at risk for delays in the development of language, literacy, and academic skills. The proposed work will examine how young deaf children learn to integrate visual linguistic and non-linguistic input during interaction, a skill critical for language development. By identifying the factors that contribute to deaf children's ability to control gaze and visual attention, this work will yield important insights for the design of effective intervention, instruction, and classroom environments for deaf children.

National Institutes of Health (NIH)
National Institute on Deafness and Other Communication Disorders (NIDCD)
Research Project (R01)
Study Section: Language and Communication Study Section (LCOM)
Program Officer: Cooper, Judith
Boston University
Schools of Education
United States
Lieberman, Amy M; Borovsky, Arielle; Mayberry, Rachel I (2018) Prediction in a visual language: real-time sentence processing in American Sign Language across development. Lang Cogn Neurosci 33:387-401
Lieberman, Amy M; Borovsky, Arielle; Hatrak, Marla et al. (2016) Where to look for American Sign Language (ASL) sublexical structure in the visual world: Reply to Salverda (2016). J Exp Psychol Learn Mem Cogn 42:2002-2006