Understanding spoken language involves a cascade of processes that allow us to perceive speech sounds, identify the words being spoken, connect them to determine the meaning of the sentence, and use the conversational context to infer the message the speaker intends to convey. In adults, these processes are linked such that perception at a lower level is shaped by expectations formed at higher levels. For example, we identify a word not only on the basis of the sounds we hear (phonological input) but also based on the meaning of the sentence and conversation in which it occurs. These high-level constraints help us anticipate how a sentence will continue as it unfolds in real time. This ability is critical for fluent language comprehension, but we are just beginning to understand how it develops, in part because the paradigms commonly used with adults involve listening to long lists of unrelated sentences with no clear goal in mind.

The proposed project addresses this vital gap by developing a new child-friendly paradigm for studying comprehension using event-related potentials (ERPs), measures of brain activity, recorded during a natural listening task. Children listen to a story while ERPs time-locked to the onset of every word are recorded, allowing a large amount of data to be collected in a short time in an ecologically valid and fun task. The proposed experiments use this task to study a brain signature of word recognition (the N400) and compare the degree to which word recognition depends on properties of the word itself (e.g., its frequency) as opposed to high-level expectations (e.g., the predictability of the word in context). Exp. 1 tracks how the use of these two constraints changes between ages 5-6 and adulthood and how these skills relate to language ability and literacy. Exp. 2 adapts the task for preschool-aged children to determine whether they also use contextual cues. Exp. 3 explores how the use of context in word identification is affected by errors in the sentence, in a design that combines the natural listening task with a tightly controlled experimental manipulation.

The paradigm developed in this proposal could be applied to a wide variety of questions about language comprehension and used with clinical populations that are difficult to study with traditional designs. Tracing the development of moment-to-moment language comprehension is central to understanding how children become fluent listeners, and it is an essential first step toward identifying the atypical patterns of development that characterize disorders such as specific language impairment, autism, and dyslexia. Because literacy builds on oral language, this work may also ultimately inform educational interventions.
This project develops a new paradigm for studying moment-to-moment language comprehension by recording event-related potentials (ERPs) as a child listens to a story. This sensitive and simple task will allow researchers to address a broad range of experimental questions in groups, such as preschool-aged children and clinical populations, that are not motivated to sit through artificial laboratory tasks. The proposed studies use this paradigm to track the development of lexical processing in typical children from ages 3 through 15, an essential first step toward identifying and understanding the atypical patterns of development that characterize disorders such as specific language impairment, autism, and dyslexia.