The proposed research will examine infants' sensitivity to multiple spectral cues in speech perception. Speech is a complex signal, with numerous acoustic cues for every consonant and vowel. Through experience, listeners learn to exploit correlations between multiple cues, making perception more efficient and robust. Adults display this knowledge through their use of cues that are spectrally local (e.g., formant transitions) as well as distributed (e.g., gross spectral shape, or tilt) to distinguish speech sounds. Although prior research has demonstrated infants' ability to distinguish many speech sounds, how they distinguish these sounds is unclear. Infants' sensitivity to individual cues, the relative importance they assign to each cue, and when they learn to exploit correlations between cues all remain poorly understood. We propose four experiments to address these questions.
The first aim of this project is to investigate sensitivity to spectral cues. To address this issue, the first two experiments will examine speech perception when the natural covariance between two cues is violated or maintained. The latter two experiments will test sensitivity to changes in individual cues.
The second aim is to assess relative cue salience across the lifespan. Results from the first experiment will be compared to existing data from normal-hearing and hearing-impaired elderly adult listeners who completed a similar task (Alexander & Kluender, in press; in preparation) to assess differential effects of listening experience and hearing health on perception of the same speech stimuli. We hypothesize that both 6-to-7-month-old and 11-to-12-month-old infants will exhibit perceptual sensitivity to both spectral cues in speech perception, with 11-to-12-month-olds displaying the greatest sensitivity. We also hypothesize that younger infants may be relatively more influenced by spectrally global (e.g., tilt) cues than older infants. Our long-term objective is to better understand the development of speech perception in infants with normal and compromised hearing. These results will help refine treatment methods for infants with hearing loss, and inform the use of devices such as hearing aids and cochlear implants. Public health statement: The proposed research will reveal the acoustic information infants use to distinguish speech sounds, and whether this information is used differently by adults. Knowing what infants listen for will benefit efforts to facilitate the development of speech perception in children with compromised hearing. This research will also investigate when infants learn to exploit statistical regularities between cues in natural speech.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Predoctoral Individual National Research Service Award (F31)
Project #
5F31DC009532-02
Application #
7655501
Study Section
Communication Disorders Review Committee (CDRC)
Program Officer
Cyr, Janet
Project Start
2008-07-01
Project End
2011-06-30
Budget Start
2009-07-01
Budget End
2010-06-30
Support Year
2
Fiscal Year
2009
Total Cost
$27,416
Indirect Cost
Name
University of Wisconsin Madison
Department
Pediatrics
Type
Other Domestic Higher Education
DUNS #
161202122
City
Madison
State
WI
Country
United States
Zip Code
53715
Stilp, Christian E; Goupell, Matthew J; Kluender, Keith R (2013) Speech perception in simulated electric hearing exploits information-bearing acoustic change. J Acoust Soc Am 133:EL136-41
Stilp, Christian E; Kluender, Keith R (2012) Efficient coding and statistically optimal weighting of covariance among acoustic attributes in novel sounds. PLoS One 7:e30845
Stilp, Christian E; Kluender, Keith R (2011) Non-isomorphism in efficient coding of complex sound properties. J Acoust Soc Am 130:EL352-7
Stilp, Christian E (2011) The redundancy of phonemes in sentential context. J Acoust Soc Am 130:EL323-8
Stilp, Christian E; Alexander, Joshua M; Kiefte, Michael et al. (2010) Auditory color constancy: calibration to reliable spectral properties across nonspeech context and targets. Atten Percept Psychophys 72:470-80
Stilp, Christian E; Rogers, Timothy T; Kluender, Keith R (2010) Rapid efficient coding of correlated complex acoustic properties. Proc Natl Acad Sci U S A 107:21914-9
Stilp, Christian E; Kluender, Keith R (2010) Cochlea-scaled entropy, not consonants, vowels, or time, best predicts speech intelligibility. Proc Natl Acad Sci U S A 107:12387-92
Stilp, Christian E; Kiefte, Michael; Alexander, Joshua M et al. (2010) Cochlea-scaled spectral entropy predicts rate-invariant intelligibility of temporally distorted sentences. J Acoust Soc Am 128:2112-26