Everyday listening environments often contain background noise as well as reverberation, the accumulation of acoustic reflections off surrounding surfaces. While normal-hearing listeners can converse in such environments with little effort, listeners with hearing loss find them challenging. Despite the ubiquity of reverberation, little is known about the neural mechanisms that support robust listening in reverberation, and even less about the combined effects of noise and reverberation on neural coding. The proposed studies will, for the first time, investigate how speech-like stimuli presented in realistic reverberation are encoded in the auditory midbrain of an unanesthetized rabbit model. The auditory midbrain is known to play a key role in the precedence effect, which supports accurate sound localization in reverberation.
Three specific aims will use stimuli of increasing realism to address fundamental questions. Because modulations of the amplitude envelope are essential for speech reception, Specific Aim 1 will focus on amplitude-modulated (AM) stimuli. We will test the hypothesis that a previously identified "reverberant advantage" in the temporal coding of AM sounds arises because the neurons' sensitivity to AM is enhanced by their sensitivity to the fluctuations in interaural coherence that necessarily accompany AM in reverberation. If verified, this hypothesis would highlight the importance of preserving dynamic binaural cues in hearing aids.
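To make the premise behind this hypothesis concrete, the sketch below (a simplified simulation, not the experimental paradigm) illustrates why AM in reverberation necessarily produces fluctuations in interaural coherence: near envelope peaks the interaurally identical direct sound dominates and coherence is high, whereas near envelope dips the interaurally decorrelated reverberant tails dominate and coherence drops. All parameter values (sampling rate, modulation rate, reverberation time, direct-to-reverberant ratio) are illustrative assumptions.

```python
# Minimal sketch, assuming a crude reverberation model: the direct sound is
# identical at the two ears, while the reverberant tails come from independent
# exponentially decaying noise impulse responses and are therefore interaurally
# decorrelated. All parameter values are illustrative.
import numpy as np

fs = 20000                         # sampling rate (Hz)
dur = 1.0                          # stimulus duration (s)
fm = 8.0                           # modulation frequency (Hz)
t = np.arange(int(fs * dur)) / fs

rng = np.random.default_rng(0)
carrier = rng.standard_normal(t.size)               # broadband noise carrier
src = (1 + np.cos(2 * np.pi * fm * t)) * carrier    # 100%-modulated SAM noise

t60 = 0.5                          # assumed reverberation time (s)
ir_len = int(fs * t60)
decay = np.exp(-3 * np.log(10) * np.arange(ir_len) / ir_len)  # -60 dB at t60
ir_left = decay * rng.standard_normal(ir_len)
ir_right = decay * rng.standard_normal(ir_len)

rev_gain = 0.02                    # sets an illustrative direct-to-reverberant ratio
left = src + rev_gain * np.convolve(src, ir_left)[:src.size]
right = src + rev_gain * np.convolve(src, ir_right)[:src.size]

# Short-term interaural coherence: normalized interaural correlation at zero lag
# in 20-ms windows, compared between high-envelope and low-envelope windows.
win = int(0.02 * fs)
coh, env = [], []
for s in range(0, src.size - win, win):
    l, r = left[s:s + win], right[s:s + win]
    coh.append(np.dot(l, r) / np.sqrt(np.dot(l, l) * np.dot(r, r)))
    env.append(np.sqrt(np.mean(src[s:s + win] ** 2)))
coh, env = np.array(coh), np.array(env)
peaks = env > np.median(env)
print(f"coherence near envelope peaks: {coh[peaks].mean():.2f}")
print(f"coherence near envelope dips:  {coh[~peaks].mean():.2f}")
```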
Specific Aim 2 will directly test how natural speech is coded by midbrain auditory neurons in various amounts of reverberation. We will use modern techniques for stimulus reconstruction from neural population activity to quantitatively assess the robustness of neural coding in reverberation and test whether adaptation to the statistics of the reverberant environment contributes to robust coding.
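As a rough illustration of what "stimulus reconstruction from neural population activity" involves, the sketch below fits a linear reconstruction filter by ridge regression to recover a stimulus envelope from simulated population responses and scores the reconstruction by its correlation with the true envelope. The simulated responses, lag structure, and regularization value are assumptions made for illustration, not the proposed analysis pipeline.

```python
# Minimal sketch of linear stimulus reconstruction by ridge regression; the
# "neural responses" are simulated stand-ins, and every parameter value is an
# illustrative assumption rather than part of the proposed analysis.
import numpy as np

rng = np.random.default_rng(1)
fs = 100                       # envelope sampling rate (Hz)
n_samples = 2000
n_neurons = 30
n_lags = 20                    # reconstruction filter spans 200 ms of response

# Toy stimulus envelope: smoothed noise standing in for a speech envelope.
env = np.convolve(rng.standard_normal(n_samples), np.ones(10) / 10, mode="same")

# Toy population responses: each neuron follows the envelope at a random
# latency, plus noise (a placeholder for measured firing rates).
resp = np.zeros((n_samples, n_neurons))
for j in range(n_neurons):
    lag = rng.integers(0, n_lags)
    resp[lag:, j] = env[:n_samples - lag] + 0.5 * rng.standard_normal(n_samples - lag)

# Lagged design matrix: column (j, k) holds neuron j's response k samples
# after the envelope sample being reconstructed (responses lag the stimulus).
X = np.zeros((n_samples, n_neurons * n_lags))
for j in range(n_neurons):
    for k in range(n_lags):
        X[:n_samples - k, j * n_lags + k] = resp[k:, j]

# Fit the reconstruction filter on the first half, evaluate on the second half.
half = n_samples // 2
X_train, X_test = X[:half], X[half:]
y_train, y_test = env[:half], env[half:]

lam = 10.0                     # ridge penalty (illustrative value)
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(X.shape[1]),
                    X_train.T @ y_train)
y_hat = X_test @ w

print(f"reconstruction accuracy (correlation): {np.corrcoef(y_hat, y_test)[0, 1]:.2f}")
```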
Specific Aim 3 will extend the Aim 2 results to the common case in which reverberation is combined with background noise. Using an innovative technique to independently control the amount of reverberation in a speech target and a noise interferer, we will disentangle how the neural coding of the target is affected by each. We will determine how the degradation of the interferer's binaural cues caused by reverberation reduces the benefit, for neural coding of the target, of spatial separation between target and interferer. Together, these studies will advance fundamental understanding of the neural mechanisms underlying robust coding in reverberation. The findings will guide the development of new processing strategies for hearing aids and cochlear implants to improve speech reception in everyday challenging environments.
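The abstract does not specify the technique for independently controlling reverberation in the target and the interferer. One plausible realization, sketched below purely for illustration and not presented as the authors' method, is to convolve each source with its own binaural room impulse response (reverberant for one, anechoic for the other) before mixing them at a chosen target-to-interferer ratio. All signals and impulse responses in the sketch are synthetic placeholders.

```python
# Illustrative sketch only, not presented as the authors' technique: convolve
# the target and the interferer with separate binaural room impulse responses
# (BRIRs) so the amount of reverberation on each is set independently, then
# mix at a chosen target-to-interferer ratio. All signals are placeholders.
import numpy as np

def reverberate(source, brir_left, brir_right):
    """Convolve a mono source with a two-channel (left/right ear) impulse response."""
    left = np.convolve(source, brir_left)
    right = np.convolve(source, brir_right)
    n = max(left.size, right.size)
    out = np.zeros((n, 2))
    out[:left.size, 0] = left
    out[:right.size, 1] = right
    return out

def mix(target, interferer, tir_db):
    """Mix two binaural signals at a given target-to-interferer ratio (in dB)."""
    n = max(target.shape[0], interferer.shape[0])
    t = np.zeros((n, 2))
    t[:target.shape[0]] = target
    i = np.zeros((n, 2))
    i[:interferer.shape[0]] = interferer
    gain = 10 ** (-tir_db / 20) * np.sqrt(np.mean(t ** 2) / np.mean(i ** 2))
    return t + gain * i

# Example: reverberant target mixed with an anechoic interferer at +5 dB, so
# reverberation affects only the target. Real experiments would use recorded
# or simulated BRIRs and recorded speech; here, noise stands in for both.
rng = np.random.default_rng(2)
speech = rng.standard_normal(20000)     # placeholder for a speech token
noise = rng.standard_normal(20000)      # placeholder for the noise interferer
decay = np.exp(-np.arange(4000) / 800)
brir_rev_left = decay * rng.standard_normal(4000)
brir_rev_right = decay * rng.standard_normal(4000)
brir_dry = np.array([1.0])              # anechoic: direct path only

scene = mix(reverberate(speech, brir_rev_left, brir_rev_right),
            reverberate(noise, brir_dry, brir_dry), tir_db=5.0)
print(scene.shape)                      # (n_samples, 2) binaural mixture
```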

Public Health Relevance

Hearing-impaired listeners struggle to understand speech in everyday reverberant environments, especially when reverberation is combined with noise, yet little is known about the neural mechanisms that allow normal-hearing listeners to understand speech in reverberation. The proposed studies will, for the first time, characterize how reverberation, alone and combined with noise, affects the neural coding of speech in the auditory midbrain. The findings will be essential for improving the way sound is processed for individuals with hearing loss in reverberant environments.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
2R01DC002258-26
Application #
10049967
Study Section
Auditory System Study Section (AUD)
Program Officer
Miller, Roger
Project Start
1995-01-01
Project End
2025-06-30
Budget Start
2020-07-01
Budget End
2021-06-30
Support Year
26
Fiscal Year
2020
Total Cost
Indirect Cost
Name
Massachusetts Eye and Ear Infirmary
Department
Type
DUNS #
073825945
City
Boston
State
MA
Country
United States
Zip Code
02114
Zuk, Nathaniel; Delgutte, Bertrand (2017) Neural coding of time-varying interaural time differences and time-varying amplitude in the inferior colliculus. J Neurophysiol 118:544-563
Day, Mitchell L; Delgutte, Bertrand (2016) Neural population encoding and decoding of sound source location across sound level in the rabbit inferior colliculus. J Neurophysiol 115:193-207
Slama, Michaël C C; Delgutte, Bertrand (2015) Neural coding of sound envelope in reverberant environments. J Neurosci 35:4452-68
Wang, Le; Devore, Sasha; Delgutte, Bertrand et al. (2014) Dual sensitivity of inferior colliculus neurons to ITD in the envelopes of high-frequency sounds: experimental and modeling study. J Neurophysiol 111:164-81
Day, Mitchell L; Delgutte, Bertrand (2013) Neural correlates of the perception of sound source separation. Adv Exp Med Biol 787:255-62
Day, Mitchell L; Delgutte, Bertrand (2013) Decoding sound source location and separation using neural population activity patterns. J Neurosci 33:15837-47
Wen, Bo; Wang, Grace I; Dean, Isabel et al. (2012) Time course of dynamic range adaptation in the auditory nerve. J Neurophysiol 108:69-82
Day, Mitchell L; Koka, Kanthaiah; Delgutte, Bertrand (2012) Neural encoding of sound source location in the presence of a concurrent, spatially separated source. J Neurophysiol 108:2612-28
Wang, Grace I; Delgutte, Bertrand (2012) Sensitivity of cochlear nucleus neurons to spatio-temporal changes in auditory nerve activity. J Neurophysiol 108:3172-95
Plourde, Eric; Delgutte, Bertrand; Brown, Emery N (2011) A point process model for auditory neurons considering both their intrinsic dynamics and the spectrotemporal properties of an extrinsic signal. IEEE Trans Biomed Eng 58:1507-10

Showing the most recent 10 out of 30 publications