Identifying how infants learn words from the language they hear and the world around them, rather than through highly constrained in-laboratory measures alone, is crucial for understanding language acquisition. The proposed research involves videotaping infant-parent interactions monthly over the course of a year (between 6 and 18 months of age), a critical period for word learning. These same infants, along with others, will be tested in the laboratory during this period to examine their eye movements and word learning in response to known and novel visual stimuli, in order to examine the effects of experience, familiarity, linguistic structure, speech sounds, and semantic properties on word learning. The two specific aims of this project are: (1) to determine how infants' daily experiences lead to word learning, with both everyday and novel words; and (2) to develop paradigms that query links between visual (perceptual) and linguistic (cognitive) processes. The project has three main components: an in-home video corpus, word learning experiments using eye-tracking and elicitation methods, and further experiments and computational analyses using the corpus-generated videos. One key set of in-laboratory studies uses the infants who were recorded for the in-home corpus. Additionally, both sets of experiments include infants who do not take part in the corpus study, ensuring the collection of appropriate baselines and an independent evaluation of predictors whose success does not rely on any specific outcome of the corpus work. The large video library created by this project (including technologically innovative head-mounted camera footage) would add a unique, freely available, and sorely needed resource for the research community, invaluable for scientists interested in infants' early auditory and visual home environments.
While limited infant video corpora exist, the rich resource we propose to create is currently unavailable. The paradigms developed for analyzing visual saliency and conceptual/perceptual learning would add to the arsenal of experimental methods available to cognitive scientists. More broadly, the outcomes of the corpus analyses, word learning experiments, and visual saliency experiments will increase our understanding of word learning specifically, and of language in general. This is a critical aim not just for language acquisition research, but also for the fields of infant development, cognitive science, psychology, linguistics, pediatrics, and speech-language pathology. This project provides an opportunity to advance fundamental theory in language acquisition by establishing a normative baseline, which, in turn, will allow for improved assessments of infants from disadvantaged backgrounds and of infants with auditory, visual, and language-based impairments, such as Autism Spectrum Disorder, as an essential part of future research.
Identifying how infants learn words from their linguistic and environmental input, rather than through in-laboratory studies alone, is crucial for understanding language development in the first two years of life. Establishing normative baselines for how infants learn words has immediate and direct applicability to two populations: children from low socioeconomic status homes, and children with an Autism Spectrum Disorder (ASD), both of which previous work has shown to have delayed or impaired language abilities, on average, relative to typically developing children. By uncovering which aspects of infants' visual and auditory input drive their early word learning, this program of research will be especially informative for tailoring more accurate assessment and support for these populations at younger ages.