Despite tremendous progress in cochlear implant (CI) technology and performance over the past three decades, speech perception through a CI remains considerably poorer than in normal hearing (NH), particularly in noisy backgrounds. Similar difficulties are experienced by hearing-impaired (HI) listeners, even after hearing-aid fitting. The long-term goal of this research is to improve auditory and speech perception via CIs and hearing aids, through a greater understanding of the basic mechanisms that contribute to, and limit, the perception of speech in challenging acoustic environments.
The first aim examines auditory enhancement and context effects, analogous to negative afterimages and color constancy in vision. These effects may help "normalize" the incoming sound and produce perceptual invariance in the face of the widely varying acoustics produced by different rooms, different talkers, and different acoustic backgrounds. Little is known about these effects in either HI or CI listeners. Because electric stimulation bypasses the cochlea, measurable enhancement and context effects in CI users would provide evidence against recent theories that attribute these effects to efferent control of cochlear gain. If HI and/or CI listeners do not exhibit enhancement effects, then our results will guide novel signal-processing schemes designed to recreate important aspects of perceptual normalization.
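One way such a normalization scheme might operate can be sketched in a few lines. The code below is purely illustrative and is our assumption, not the grant's actual algorithm: it estimates the long-term average magnitude spectrum of a preceding acoustic context and applies the inverse coloration to the incoming sound, undoing a spectral tilt shared with the context (as a room or talker might impose). The function name and all parameter choices (FFT size, `eps`) are hypothetical.

```python
import numpy as np

def context_normalize(target, context, n_fft=1024, eps=1e-8):
    """Hypothetical sketch of perceptual 'normalization': divide the
    target's spectrum by the long-term average magnitude spectrum of a
    preceding context, removing a coloration common to both.
    Not the processing scheme proposed in the grant."""
    # Long-term average magnitude spectrum of the context,
    # scaled to unit mean so overall level is roughly preserved.
    ctx_mag = np.abs(np.fft.rfft(context, n_fft))
    ctx_mag /= ctx_mag.mean() + eps
    # Apply the inverse coloration to the target.
    tgt_spec = np.fft.rfft(target, n_fft)
    return np.fft.irfft(tgt_spec / (ctx_mag + eps), n_fft)[: len(target)]
```

Applied to a signal that shares the context's spectral tilt, the output spectrum comes out flatter, a crude analogue of the perceptual constancy this aim investigates.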
The second aim tests the hypothesis that spectral resolution helps segregate sounds with different spectral content, and that the usual measures of speech perception in spectrally matched noise underestimate the importance of spectral resolution in many real-world situations, where masking sounds rarely match the target spectrally. We will then investigate the interaction between spectral resolution and time-dependent spectral gain changes, based on the findings from Aim 1. The results should lead to new clinical measures of speech perception that capture both static and dynamic aspects of spectral resolution and predict performance in everyday listening conditions. These measures will be used to guide and validate the new signal-processing schemes developed under Aim 1. In the second part, we will test the hypothesis that the temporal envelope fluctuations inherent in nominally "steady" noise play an important role in limiting speech perception by CI listeners. We will measure speech perception in noise and compare it to speech perception in broadband maskers that produce no inherent amplitude fluctuations. A parametric study of how different temporal amplitude-modulation frequency bands affect speech perception, in the absence of spurious inherent noise fluctuations, will provide estimates of the relative contributions of different modulation frequencies to speech masking in CI users. Overall, the proposed work will further our fundamental knowledge of dynamic aspects of spectral and temporal processing in normal, impaired, and electric hearing, and will lead to new signal-processing approaches that hold promise for improving speech perception for hearing-impaired and CI listeners in complex and varying acoustic backgrounds.
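The inherent envelope fluctuations of "steady" noise that this hypothesis targets can be made concrete with a short sketch (an illustration of the phenomenon, not the proposed experiment): extract the Hilbert envelope of a band of Gaussian noise and compare its fluctuation depth with that of a pure tone, whose envelope is essentially flat. Sampling rate, filter band, and the `env_depth` metric are all our own illustrative choices.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfilt

fs = 16000                      # sampling rate (Hz), arbitrary choice
t = np.arange(fs) / fs          # 1 s of signal
rng = np.random.default_rng(1)

# Nominally "steady" Gaussian noise, band-limited to 1-2 kHz
sos = butter(4, [1000, 2000], btype="bandpass", fs=fs, output="sos")
noise = sosfilt(sos, rng.standard_normal(t.size))

# A pure tone at the band centre, with an essentially flat envelope
tone = np.sin(2 * np.pi * 1500 * t)

def env_depth(x):
    """Normalized fluctuation depth of the Hilbert envelope
    (std / mean; 0 would mean a perfectly flat envelope)."""
    env = np.abs(hilbert(x))
    return env.std() / env.mean()
```

For Gaussian noise the envelope is Rayleigh-distributed, giving a fluctuation depth around 0.5 regardless of bandwidth, which is why even "steady" noise acts partly as a modulated masker.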
The cochlear implant is the world's most successful neural prosthesis. Cochlear implants allow people with severe hearing loss or deafness to regain their hearing and communicate more freely with others. Despite their success, neither cochlear implants nor hearing aids typically provide good speech understanding in noisy environments. Loss of communication leads to social and intellectual isolation and poorer economic prospects. This project will deepen our understanding of factors currently limiting the performance of cochlear implants and hearing aids, and will design and test new signal processing algorithms that should improve the ability of hearing-impaired people to communicate with others in everyday situations.
Oxenham, Andrew J; Kreft, Heather A (2014). Speech perception in tones and noise via cochlear implants reveals influence of spectral resolution on temporal processing. Trends Hear 18.
Wang, Ningyuan; Oxenham, Andrew J (2014). Spectral motion contrast as a speech context effect. J Acoust Soc Am 136:1237.