The goal of this project is to develop a model of how listeners perceive speech despite the extreme context-sensitivity that results from co-articulation. Listeners' ability to recover speech information, despite dramatic articulatory and acoustic assimilation between adjacent speech sounds, is remarkable and central to understanding speech perception. Recent results from this laboratory suggest that it may be possible to define and model quite general processes of auditory perception and learning that provide a significant part of the explanation for findings demonstrating a correspondence between speech perception and production with perseverative co-articulation. To what extent do general auditory processes serve to accommodate the acoustic consequences of co-articulation? More generally, does perceptual (spectral) contrast serve, in part or in whole, to undo the assimilative nature of co-articulation and its acoustic consequences? More specifically, do general psychoacoustic enhancement effects, presumably due to relatively peripheral and simple neural mechanisms, provide spectral contrast for complex sounds such as speech? Beyond such basic auditory effects, what is the relative contribution of experience (learning) with co-articulated speech? What are the prospects for relatively elegant signal-processing algorithms that would provide a desirable adjunct to amplification for hearing-impaired listeners? Toward these ends, studies with non-human animals provide a model of auditory processing unadulterated by experience with speech, as well as a model for investigating the role of experience with co-variation among acoustic attributes consequent to co-articulated production. Here, animals permit precise control of experience with speech, establishing whether learning plays a significant role in accommodating the acoustic consequences of co-articulation while defining the relative contributions of experience and general auditory processes.

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Research Project (R01)
Project #: 5R01DC004072-04
Application #: 6634493
Study Section: Special Emphasis Panel (ZRG1-BBBP-7 (01))
Program Officer: Shekim, Lana O
Project Start: 2000-04-01
Project End: 2004-03-31
Budget Start: 2003-04-01
Budget End: 2004-03-31
Support Year: 4
Fiscal Year: 2003
Total Cost: $233,146
Indirect Cost:
Name: University of Wisconsin Madison
Department: Psychology
Type: Schools of Arts and Sciences
DUNS #: 161202122
City: Madison
State: WI
Country: United States
Zip Code: 53715
Stilp, Christian E; Goupell, Matthew J; Kluender, Keith R (2013) Speech perception in simulated electric hearing exploits information-bearing acoustic change. J Acoust Soc Am 133:EL136-41
Stilp, Christian E; Kluender, Keith R (2012) Efficient coding and statistically optimal weighting of covariance among acoustic attributes in novel sounds. PLoS One 7:e30845
Alexander, Joshua M; Jenison, Rick L; Kluender, Keith R (2011) Real-time contrast enhancement to improve speech recognition. PLoS One 6:e24630
Alexander, Joshua M; Kluender, Keith R (2010) Temporal properties of perceptual calibration to local and broad spectral characteristics of a listening context. J Acoust Soc Am 128:3597-613
Stilp, Christian E; Alexander, Joshua M; Kiefte, Michael et al. (2010) Auditory color constancy: calibration to reliable spectral properties across nonspeech context and targets. Atten Percept Psychophys 72:470-80
Coady, Jeffry; Evans, Julia L; Kluender, Keith R (2010) Role of phonotactic frequency in nonword repetition by children with specific language impairments. Int J Lang Commun Disord 45:494-509
Stilp, Christian E; Rogers, Timothy T; Kluender, Keith R (2010) Rapid efficient coding of correlated complex acoustic properties. Proc Natl Acad Sci U S A 107:21914-9
Stilp, Christian E; Kluender, Keith R (2010) Cochlea-scaled entropy, not consonants, vowels, or time, best predicts speech intelligibility. Proc Natl Acad Sci U S A 107:12387-92
Stilp, Christian E; Kiefte, Michael; Alexander, Joshua M et al. (2010) Cochlea-scaled spectral entropy predicts rate-invariant intelligibility of temporally distorted sentences. J Acoust Soc Am 128:2112-26
Coady, Jeffry A; Evans, Julia L; Kluender, Keith R (2010) The role of phonotactic frequency in sentence repetition by children with specific language impairment. J Speech Lang Hear Res 53:1401-15

Showing the most recent 10 out of 19 publications