Infants are born with a preference for listening to speech over non-speech, and with a set of perceptual sensitivities that enable them to discriminate most of the speech sound differences used in the world's languages, thus preparing them to acquire any language. By 10 months of age, infants have become experts at perceiving their native language. This involves improvements in the discrimination of native consonant contrasts and, more importantly for this grant, a decline in the discrimination of non-native consonant distinctions. In the adult, speech perception is richly multimodal. What we hear is influenced by visual information in talking faces, by self-produced articulations, and even by external tactile stimulation. While speech perception is also multisensory in young infants, the genesis of this capacity is debated. According to one view, multisensory perception is established through learned integration: seeing and hearing a particular speech sound allows learning of the commonalities between the two. This grant proposes and tests the hypothesis that infant speech perception is multisensory without specific prior learning experience.

Debates regarding the ontogeny of human language have centered on the issue of whether the perceptual building blocks of language are acquired through experience or whether they are innate. Yet this nature vs. nurture controversy is rapidly being replaced by a much more nuanced framework. Here, it is proposed that the earliest-developing sensory system (likely somatosensory in the case of speech, including somatosensory feedback from the oral-motor movements first manifest in the fetus) provides an organization on which auditory speech can build once the peripheral auditory system comes online, by 22 weeks of gestation. Heard speech, both the maternal voice transmitted via bone conduction and external (filtered) speech transmitted through the uterus, is organized in part by this somatosensory/motor foundation. At birth, when vision becomes available, seen speech maps onto this already-established foundation. These interconnected perceptual systems thus provide a set of parameters for matching heard, seen, and felt speech at birth. Importantly, it is argued that these multisensory perceptual foundations are established for language-general perception: they set in place an organization that provides redundancy among the oral-motor gesture, the visible oral-motor movements, and the auditory percept of any speech sound. Hence, specific learning of individual cross-modal matches is not required.

Our thesis, then, is that while multisensory speech perception has a developmental history (and hence is not akin to an 'innate' starting point), the multisensory sensitivities should be in place without experience of specific speech sounds. Thus, multisensory processing should be as evident for non-native, never-before-experienced speech sounds as it is for native and hence familiar ones. To test this hypothesis against the alternative hypothesis of learned integration, English-learning infants will be tested on non-native (unfamiliar) speech sound contrasts and compared to Hindi-learning infants, for whom these contrasts are native. Four sets of experiments, each using a multimodal Distributional Learning paradigm, are proposed. Infants will be tested at 6 months, an age at which they can still discriminate non-native speech sounds, and at 10 months, after this ability has begun to decline.
It is proposed that if speech perception is multisensory without specific experience, the addition of matching visual, tactile, or motor information should facilitate discrimination of a non-native speech sound contrast at 10 months, while the addition of mismatching information should disrupt discrimination at 6 months. If multisensory speech perception is instead learned, this pattern should be seen only in Hindi-learning infants, for whom the contrasts are familiar and hence already intersensory.
The Specific Aims are to test the influence of: 1) Visual information on Auditory speech perception (Experimental Set 1); 2) Oral-Motor gestures on Auditory speech perception (Experimental Set 2); 3) Oral-Motor gestures on Auditory-Visual speech perception (Experimental Set 3); and 4) Tactile information on Auditory speech perception (Experimental Set 4). This work is of theoretical import for characterizing speech perception development in typically developing infants, and it provides a framework for understanding the roots of possible delay in infants born with a sensory or oral-motor impairment. The opportunities provided by, and the constraints imposed by, an initial multisensory speech percept allow infants to rapidly acquire knowledge from their language-learning environment, while a deficit in one of the contributing modalities could compromise optimal speech and language development.

Public Health Relevance

While increasing evidence shows that, even in young infants, visual, motor, and somatosensory information contributes to the perception of spoken language, the origins of these multisensory sensitivities are debated. This grant tests the hypothesis that auditory speech perception is built on multisensory foundations from early in life against the alternative, more commonly posited hypothesis of learned integration. The proposed studies will not only advance our theoretical understanding of speech and language development but will also have implications for intervention in language disorders.

Agency
National Institutes of Health (NIH)
Institute
Eunice Kennedy Shriver National Institute of Child Health & Human Development (NICHD)
Type
Exploratory/Developmental Grants (R21)
Project #
5R21HD079260-02
Application #
8720041
Study Section
Special Emphasis Panel (ZRG1-BBBP-T (52))
Program Officer
Freund, Lisa S
Project Start
2013-08-12
Project End
2015-05-31
Budget Start
2014-06-01
Budget End
2015-05-31
Support Year
2
Fiscal Year
2014
Total Cost
$141,692
Indirect Cost
$10,496
Name
University of British Columbia
Department
Type
DUNS #
251949962
City
Vancouver
State
BC
Country
Canada
Zip Code
V6T 1Z3
Publications
Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya et al. (2017) The organization and reorganization of audiovisual speech perception in the first year of life. Cogn Dev 42:37-48
Havy, Mélanie; Foroud, Afra; Fais, Laurel et al. (2017) The Role of Auditory and Visual Speech in Word Learning at 18 Months and in Adulthood. Child Dev 88:2043-2059
Bruderer, Alison G; Danielson, D Kyle; Kandhadai, Padmapriya et al. (2015) Sensorimotor influences on speech perception in infancy. Proc Natl Acad Sci U S A 112:13531-6