Although human language requires multi-sensory processing, much of the research on children's language development focuses on the auditory speech signal alone. This is despite the fact that the speaking face provides a substantial amount of information that observers use when processing language. Although behavioral evidence is beginning to emerge about the degree to which preverbal infants can coordinate complex visual and auditory information, there is relatively little neurophysiological data to inform our understanding of how this coordination develops and how it influences the neural underpinnings of language processing at different developmental time points and in different infant populations. The proposed experiments will test the hypothesis that, while infants may be predisposed to process auditory speech in the left temporal region, this processing is shaped by environmental experience, such as that provided by increasingly extensive exposure to visual speech or to more than one language. Our investigation will examine the roles of visual and auditory speech both separately and in coordination (audiovisual speech) in order to understand how these distinct sources of perceptual information facilitate the development of language processing abilities in preverbal infants. First, we propose to use near-infrared spectroscopy (NIRS) to test the influence of isolated visual and auditory speech on patterns of neural activity in the bilateral temporal cortices of 9-month-old infants, and to compare that activity to the activity observed in response to coordinated audiovisual speech (Aim 1). We will then compare the neural activity elicited by these three speech conditions across three age groups (6-, 9-, and 12-month-olds) to track the developmental trajectory of this coordination process (Aim 2). Finally, we will compare the bilateral processing patterns of monolingual (English-exposed) infants with those of age-matched bilingual (Spanish/English-exposed) infants (Aim 3). This would be the first study to demonstrate the privileged nature of audiovisual speech in early language processing, as reflected by a more robust neurovascular response in the left temporal region, relative to the right, for audiovisual speech than for auditory or visual speech presented in isolation. We expect to find that this effect is experience-based, such that infants undergo a measurable tuning process specific to their amount of prior exposure to coordinated speech. Findings from the studies outlined here will help us better understand how the auditory and visual systems interact to influence early language development, as well as the normal time course of perceptual tuning to coordinated speech in one's native language(s).
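The lateralization prediction in Aim 1 is commonly quantified by recovering oxygenated-hemoglobin (HbO) changes from the NIRS optical signal via the modified Beer-Lambert law and then computing a left/right laterality index. The following is a minimal sketch, assuming a two-wavelength continuous-wave NIRS system; the extinction coefficients, pathlength factors, source-detector separation, and response values are illustrative placeholders, not parameters taken from this proposal.

```python
import numpy as np

# Illustrative extinction coefficients (1/(mM*cm)) for [HbO, HbR] at two
# common NIRS wavelengths; real analyses use tabulated spectra.
EXT = np.array([[0.60, 1.55],   # 690 nm
                [1.10, 0.78]])  # 830 nm
DPF = np.array([6.0, 6.0])      # differential pathlength factors (assumed)
SEP = 3.0                       # source-detector separation in cm (assumed)

def hb_changes(delta_od):
    """Modified Beer-Lambert law: map optical-density changes at the two
    wavelengths (shape: channels x 2) to [dHbO, dHbR] in mM (channels x 2)."""
    effective = delta_od / (SEP * DPF)        # remove pathlength scaling
    return np.linalg.solve(EXT, effective.T).T

def laterality_index(left, right):
    """(L - R) / (L + R); positive values indicate a left-dominant response."""
    return (left - right) / (left + right)

# Example: optical-density changes for two channels at [690, 830] nm.
d_od = np.array([[0.010, 0.018],
                 [0.008, 0.012]])
print(hb_changes(d_od))  # channels x [dHbO, dHbR]

# Hypothetical mean dHbO responses (arbitrary units) in left/right temporal
# channels for the three speech conditions; values are invented for the demo.
conditions = {"audiovisual": (0.9, 0.5),
              "auditory":    (0.6, 0.5),
              "visual":      (0.4, 0.4)}
for name, (left, right) in conditions.items():
    print(f"{name:12s} LI = {laterality_index(left, right):+.2f}")
```

Under the abstract's hypothesis, the audiovisual condition would show the largest positive laterality index, with smaller or near-zero indices for auditory or visual speech presented in isolation.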

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Predoctoral Individual National Research Service Award (F31)
Project #
1F31DC009765-01A1
Application #
7677188
Study Section
Communication Disorders Review Committee (CDRC)
Program Officer
Cyr, Janet
Project Start
2009-02-01
Project End
2012-01-31
Budget Start
2009-02-01
Budget End
2010-01-31
Support Year
1
Fiscal Year
2009
Total Cost
$32,491
Indirect Cost
Name
Texas A&M University
Department
Psychology
Type
Schools of Arts and Sciences
DUNS #
078592789
City
College Station
State
TX
Country
United States
Zip Code
77845
Fava, Eswen; Hull, Rachel; Baumbauer, Kyle et al. (2014) Hemodynamic responses to speech and music in preverbal infants. Child Neuropsychol 20:430-48
Nath, Audrey R; Fava, Eswen E; Beauchamp, Michael S (2011) Neural correlates of interindividual differences in children's audiovisual speech perception. J Neurosci 31:13963-71