Most speech conversations occur in the presence of competing sounds and acoustic reflections (reverberation). Hearing-impaired people often complain of difficulty understanding speech in such complex acoustic environments even when they do well in quiet. We propose neurophysiological and computational studies that address fundamental questions about listening in everyday environments. We will record from single units in the inferior colliculus (IC) of awake rabbits in response to sounds that incorporate key features of these complex acoustic environments.
Specific Aim 1 will study neural responses to naturalistic, dynamic stimuli in simulated reverberant environments appropriate for typical classrooms. We will examine whether low-frequency amplitude modulations in speech can provide glimpses of reliable information that allow accurate neural coding of the source location. We will also test whether IC neurons can adapt to a changing environment in order to optimize the coding of source location and amplitude modulation in reverberation.
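To make the Aim 1 stimulus paradigm concrete, the sketch below is a minimal illustration, not the actual stimulus-generation code of the proposed experiments: it creates a sinusoidally amplitude-modulated noise, convolves it with a toy exponentially decaying room impulse response whose reverberation time is roughly classroom-like, and compares envelope power at the modulation frequency in the dry and reverberant signals. All parameter values (sampling rate, modulation frequency, T60) are illustrative assumptions.

```python
# Minimal sketch (assumed parameters): a SAM-noise source in simulated reverberation.
import numpy as np
from scipy.signal import fftconvolve

fs = 50000          # sampling rate (Hz), assumed
dur = 1.0           # stimulus duration (s)
fm = 32.0           # modulation frequency (Hz), in the low-frequency speech range
mod_depth = 1.0     # full modulation depth
t60 = 0.6           # reverberation time (s), roughly classroom-like

t = np.arange(int(dur * fs)) / fs

# SAM broadband-noise source: a low-frequency envelope imposed on a noise carrier.
carrier = np.random.randn(t.size)
envelope = 1.0 + mod_depth * np.sin(2 * np.pi * fm * t)
dry = envelope * carrier

# Toy room impulse response: exponentially decaying Gaussian noise, with the decay
# rate set so energy drops 60 dB over t60 seconds. Real experiments would use
# measured or room-acoustics-simulated binaural impulse responses instead.
t_ir = np.arange(int(1.5 * t60 * fs)) / fs
decay = 10.0 ** (-3.0 * t_ir / t60)          # amplitude decay giving -60 dB at t60
rir = decay * np.random.randn(t_ir.size)
rir /= np.max(np.abs(rir))

# Reverberant stimulus = dry source convolved with the room impulse response.
wet = fftconvolve(dry, rir)[: dry.size]

def mod_power(x, fm, fs):
    """Crude measure of envelope power at the modulation frequency."""
    env = np.abs(x)                           # envelope via rectification
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(env.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - fm))]

# Reverberation smears the temporal envelope, reducing modulation power at fm.
print("envelope power at fm, dry vs. reverberant:",
      mod_power(dry, fm, fs), mod_power(wet, fm, fs))
```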
Specific Aim 2 will study neural mechanisms for localization and identification of a target sound source in the presence of an interferer. We will test whether temporal cues provided by either amplitude modulation of the source waveform or motion of the target source can improve the neural coding of target location in the presence of the interferer (see the sketch after this paragraph). This research represents a systematic effort to move auditory neurophysiology beyond the study of simple sources in anechoic space and toward natural stimuli in realistic acoustic environments. It addresses fundamental issues in auditory theory, such as the neural mechanisms for sound source segregation, compensation for the effects of reverberation, and the role of temporal coding in spatial hearing. It may also lead to a better mechanistic understanding of why hearing-impaired and elderly listeners have greater difficulty than normal-hearing listeners understanding speech in the presence of reverberation and competing sounds, and may guide the development of new processing strategies for hearing aids and auditory (cochlear and brainstem) implants that perform better in everyday environments.
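The sketch below illustrates, with assumed rather than proposed parameters, one way amplitude modulation of a target could aid localization against an interferer: a modulated noise target at one interaural time difference (ITD) is mixed with a steady interferer at the opposite ITD, and short-time interaural cross-correlation estimates of ITD tend toward the target's ITD during the target's envelope peaks. This is an illustrative signal-level toy, not the neural analysis proposed in the aim.

```python
# Minimal sketch (assumed parameters): modulation "glimpses" of a target's ITD
# in a two-source binaural mixture.
import numpy as np

fs = 50000
dur = 1.0
fm = 8.0                    # target modulation frequency (Hz)
itd_target = 300e-6         # target ITD (s), source on one side
itd_masker = -300e-6        # interferer ITD (s), opposite side

t = np.arange(int(dur * fs)) / fs
rng = np.random.default_rng(0)

def lateralize(x, itd, fs):
    """Return a (left, right) pair with the given ITD imposed as a sample delay."""
    d = int(round(abs(itd) * fs))
    delayed = np.concatenate([np.zeros(d), x[: x.size - d]])
    return (x, delayed) if itd > 0 else (delayed, x)

# Modulated noise target and unmodulated noise interferer at equal long-term level.
target = (1 + np.sin(2 * np.pi * fm * t)) * rng.standard_normal(t.size)
masker = rng.standard_normal(t.size) * np.sqrt(np.mean(target**2))

tl, tr = lateralize(target, itd_target, fs)
ml, mr = lateralize(masker, itd_masker, fs)
left, right = tl + ml, tr + mr

# Short-time ITD estimate: lag of the peak interaural cross-correlation per frame.
frame = int(0.02 * fs)      # 20-ms analysis frames
max_lag = int(1e-3 * fs)    # search +/- 1 ms
for start in range(0, t.size - frame, frame):
    l, r = left[start:start + frame], right[start:start + frame]
    lags = list(range(-max_lag, max_lag + 1))
    xc = [np.dot(l[max_lag:-max_lag], np.roll(r, -k)[max_lag:-max_lag]) for k in lags]
    itd_est = lags[int(np.argmax(xc))] / fs
    env = np.mean(np.abs(target[start:start + frame]))   # clean-target envelope, for reference
    print(f"t={start / fs:5.2f} s  target envelope={env:5.2f}  ITD estimate={itd_est * 1e6:+6.0f} us")
```

In frames where the target envelope is high, the mixture's ITD estimate tends to match the target ITD; in envelope dips it tends to match the interferer, illustrating why slow modulations can provide localization "glimpses."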
This proposal investigates the neural mechanisms that mediate the ability of normal-hearing listeners to localize and identify sounds in everyday acoustic environments containing reverberation and competing sound sources. Using single-unit recording from the auditory midbrain, our experiments will focus on how dynamic motion and temporal cues interact with neural adaptation to help code target sounds in the presence of interferers and reverberation. This research may lead to new processing strategies for hearing aids and cochlear implants that work better in the everyday acoustic environments most challenging to hearing-impaired listeners.