The two primary channels of communication in primate species are the face and the voice. Normally, facial gestures and accompanying vocalizations provide highly redundant information because they are always spatially co-located, temporally synchronous, and specified by overlapping patterns of dynamic audible and visible information. Adults profit from the multisensory redundancy that vocalizing faces offer: they use the redundancy to integrate the separate auditory and visual streams of information into coherent perceptual experiences, and the redundancy makes it easier to detect, learn, and remember vocalizing faces. Although extant evidence indicates that the ability to take advantage of multisensory redundancy emerges early in infancy, the processes underlying its emergence are still poorly understood. Consequently, the current project investigates the development of face-voice integration in human infants and explores the novel idea that perceptual narrowing plays a critical role in the development of intersensory integration. This idea is based on a recent discovery by the PI and his colleagues that younger infants can integrate a broader range of faces and voices than older infants can. We have found that young infants can integrate nonnative faces and vocalizations (either monkey faces and their vocalizations or foreign visible and audible speech sounds) but that older infants cannot. These findings are striking because they are counterintuitive and contrary to predictions arising from extant theories of intersensory development. They are, however, consistent with a growing literature on narrowing effects in the speech, face, and music processing domains and suggest that narrowing is a pan-sensory, domain-general process. Consequently, the current project will investigate intersensory perceptual narrowing (IPN).
It will be guided by 5 specific aims whose purpose is to shed light on: (1) the role of temporal intersensory synchrony, configural processing, gestural features, and language-general processes in IPN, (2) the general nature of IPN, by testing for it across different types of integration tasks, (3) the role of experience in IPN, (4) the separate contribution of unisensory perceptual processes to IPN, and (5) whether the sensitive period for IPN is affected by multisensory redundancy. We will use the intersensory matching technique to examine face-voice integration, the habituation/test method to investigate learning and discrimination of faces and voices, and measures of infant visual attention to index responsiveness. By shedding light on the perceptual and experience-dependent mechanisms that underlie the development of face-voice integration in infancy, this project will provide insights into the emergence of a critical communicative skill that is essential to adaptive functioning at the perceptual, cognitive, and social levels. This, in turn, will provide insights into the etiology of various developmental disorders (e.g., autism) and will facilitate the development of better diagnostic tools and intervention methods for such disorders.

Public Health Relevance

The prevalence of autism, a developmental disability whose hallmark is an impaired ability to respond to the faces and voices of people as sources of social communication, has been growing at an alarming rate of 10-17 percent per year. Because autism and related communication and learning disorders are developmental in nature, the earlier they are diagnosed the more effectively they can be ameliorated and/or prevented. The current project will investigate the development of some critical perceptual skills that enable infants to respond to people.

Agency
National Institutes of Health (NIH)
Institute
Eunice Kennedy Shriver National Institute of Child Health & Human Development (NICHD)
Type
Research Project (R01)
Project #
5R01HD057116-03
Application #
8434853
Study Section
Cognition and Perception Study Section (CP)
Program Officer
Freund, Lisa S
Project Start
2011-03-01
Project End
2016-02-28
Budget Start
2013-03-01
Budget End
2014-02-28
Support Year
3
Fiscal Year
2013
Total Cost
$289,005
Indirect Cost
$87,342
Name
Florida Atlantic University
Department
Psychology
Type
Schools of Arts and Sciences
DUNS #
004147534
City
Boca Raton
State
FL
Country
United States
Zip Code
33431
Lewkowicz, David J; Schmuckler, Mark A; Mangalindan, Diane M J (2018) Learning of hierarchical serial patterns emerges in infancy. Dev Psychobiol 60:243-255
Hillairet de Boisferon, Anne; Tift, Amy H; Minar, Nicholas J et al. (2018) The redeployment of attention to the mouth of a talking face during the second year of life. J Exp Child Psychol 172:189-200
Hillairet de Boisferon, Anne; Tift, Amy H; Minar, Nicholas J et al. (2017) Selective attention to a talker's mouth in infancy: role of audiovisual temporal synchrony and linguistic experience. Dev Sci 20:
Barenholtz, Elan; Mavica, Lauren; Lewkowicz, David J (2016) Language familiarity modulates relative attention to the eyes and mouth of a talker. Cognition 147:100-5
Murray, Micah M; Lewkowicz, David J; Amedi, Amir et al. (2016) Multisensory Processes: A Balancing Act across the Lifespan. Trends Neurosci 39:567-579
Pons, Ferran; Bosch, Laura; Lewkowicz, David J (2015) Bilingualism modulates infants' selective attention to the mouth of a talking face. Psychol Sci 26:490-8
Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H et al. (2015) Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience. J Exp Child Psychol 130:147-62
Lewkowicz, David J (2014) Early experience and multisensory perceptual narrowing. Dev Psychobiol 56:292-315
Barenholtz, Elan; Lewkowicz, David J; Davidson, Meredith et al. (2014) Categorical congruence facilitates multisensory associative learning. Psychon Bull Rev 21:1346-52
Pons, Ferran; Lewkowicz, David J (2014) Infant perception of audio-visual speech synchrony in familiar and unfamiliar fluent speech. Acta Psychol (Amst) 149:142-7

Showing the most recent 10 out of 19 publications