The goal of this proposal is to establish a new model for masked detection and frequency resolution, applicable to listeners with normal hearing and hearing loss, based on realistic physiological response properties. We are developing a new, fundamental framework for neural representations of acoustic stimuli that can predict a wide range of psychoacoustic phenomena. This framework focuses on neural fluctuations of auditory-nerve (AN) fibers, rather than on energy, average rates, or phase-locking to temporal fine structure. Neural fluctuations (NFs) are the relatively slow changes over time in AN responses, with rates ranging from tens of hertz to a few hundred hertz. Fluctuations in this frequency range are of interest because they strongly excite or suppress neurons in the auditory central nervous system. The NF model is based on known nonlinear properties of inner-hair-cell and AN responses, and thus has important implications for interpreting masking results in listeners with sensorineural hearing loss. A representation of masked sounds based on the NF model is an alternative to the commonly accepted excitation-pattern representation provided by the power-spectrum model of masking. The NF model successfully describes basic masking thresholds, as well as many experimental paradigms for which the power-spectrum (or energy) model fails. Unlike models based on phase-locking to temporal fine structure, the NF model is not limited to low frequencies. Here, the NF framework will be applied not only to masking paradigms but also to stimulus paradigms that focus on frequency resolution, such as discrimination of the fundamental frequency of harmonic complex tones or detection of increments in profile-analysis stimuli. Current models for the representation of these stimuli rely on a conceptual peripheral filter bank with critical bandwidths estimated from human masking results using the power-spectrum model of masking. Critical bandwidths, assumed to limit the frequency resolution of auditory representations of complex sounds, are not consistent with known physiology. In contrast, frequency resolution according to the NF model is grounded in physiologically realistic response properties of AN fibers and in the sensitivity to neural fluctuations observed in the midbrain. Finally, to explain perception based on NF cues across the entire range of audible sound levels, we will extend our AN model to include NF-driven feedback gain control, guided by the known physiology and anatomy of the medial olivocochlear efferent system. The proposed studies include: i) computational modeling to predict human thresholds, including re-examination of classical datasets that can, and those that cannot, be explained by the power-spectrum model; ii) related physiological studies in the midbrain, where cells are strongly sensitive to fluctuating inputs; and iii) new psychophysical studies designed to challenge the NF model in listeners with normal hearing and in those with sensorineural hearing loss.
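To make the contrast between the energy-based and fluctuation-based representations concrete, the sketch below computes two profiles from a bank of simulated time-varying AN firing rates: an average-rate (energy-like) profile and a neural-fluctuation profile obtained by band-pass filtering each channel's rate in roughly the 10-300 Hz range mentioned above and taking its RMS. This is an illustrative sketch only, not the project's AN or midbrain model; the sampling rate, fluctuation band edges, function name, and the toy two-channel stimulus are all assumptions introduced for illustration.

```python
# Hedged sketch: contrast an average-rate ("energy") profile with a
# neural-fluctuation (NF) profile for simulated AN firing-rate functions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 10_000            # assumed sampling rate of the rate functions (Hz)
NF_BAND = (10.0, 300.0)  # assumed fluctuation band (tens to a few hundred Hz)

def nf_profile(rate_functions, fs=FS, band=NF_BAND):
    """rate_functions: (n_channels, n_samples) array of time-varying
    firing rates (spikes/s), one row per AN channel (CF).
    Returns (average-rate profile, NF-strength profile)."""
    avg_rate = rate_functions.mean(axis=1)              # energy-like cue
    b, a = butter(2, band, btype="band", fs=fs)         # 10-300 Hz band-pass
    fluct = filtfilt(b, a, rate_functions, axis=1)      # slow rate fluctuations
    nf_strength = np.sqrt((fluct ** 2).mean(axis=1))    # RMS fluctuation per channel
    return avg_rate, nf_strength

if __name__ == "__main__":
    # Toy example: two channels with nearly identical average rates but very
    # different fluctuation depths (e.g., a channel whose fluctuations are
    # flattened by an added tone vs. a channel driven by noise alone).
    t = np.arange(0, 0.5, 1.0 / FS)
    rng = np.random.default_rng(0)
    flat = 100.0 + 5.0 * rng.standard_normal(t.size)           # weak fluctuations
    fluctuating = 100.0 + 40.0 * np.sin(2 * np.pi * 100 * t)   # strong 100-Hz fluctuation
    avg, nf = nf_profile(np.vstack([flat, fluctuating]))
    print("average rates:", np.round(avg, 1))  # nearly equal -> no energy cue
    print("NF strengths :", np.round(nf, 1))   # clearly different -> NF cue available
```

In this toy case the two channels are indistinguishable by average rate but differ strongly in fluctuation strength, which is the kind of cue the NF framework proposes that fluctuation-sensitive midbrain neurons exploit.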

Public Health Relevance

Hearing loss typically involves difficulty understanding sounds, especially in noisy backgrounds. We are developing and testing a new framework for describing how sounds are represented in the responses of neurons. Knowledge of how the healthy brain copes with difficult listening environments will provide new and important insights for aiding listeners with hearing loss. The public health relevance of this project is to develop a better understanding of the difficulties that listeners with hearing loss experience in noisy situations.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
2R01DC010813-11
Application #
10048351
Study Section
Auditory System Study Section (AUD)
Program Officer
Miller, Roger
Project Start
2010-04-01
Project End
2025-11-30
Budget Start
2020-12-01
Budget End
2021-11-30
Support Year
11
Fiscal Year
2021
Total Cost
Indirect Cost
Name
University of Rochester
Department
Biomedical Engineering
Type
School of Medicine & Dentistry
DUNS #
041294109
City
Rochester
State
NY
Country
United States
Zip Code
14627
Zuk, Nathaniel J; Carney, Laurel H; Lalor, Edmund C (2018) Preferred Tempo and Low-Audio-Frequency Bias Emerge From Simulated Sub-cortical Processing of Sounds With a Musical Beat. Front Neurosci 12:349
Carney, Laurel H (2018) Supra-Threshold Hearing and Fluctuation Profiles: Implications for Sensorineural and Hidden Hearing Loss. J Assoc Res Otolaryngol 19:331-352
Carney, Laurel H (2018) Special issue on computational models of hearing. Hear Res 360:1-2
Salimi, Nima; Zilany, Muhammad S A; Carney, Laurel H (2017) Modeling Responses in the Superior Paraolivary Nucleus: Implications for Forward Masking in the Inferior Colliculus. J Assoc Res Otolaryngol 18:441-456
Carney, Laurel H; Kim, Duck O; Kuwada, Shigeyuki (2016) Speech Coding in the Midbrain: Effects of Sensorineural Hearing Loss. Adv Exp Med Biol 894:427-435
Mao, Junwen; Carney, Laurel H (2015) Tone-in-noise detection using envelope cues: comparison of signal-processing-based and physiological models. J Assoc Res Otolaryngol 16:121-33
Mao, Junwen; Koch, Kelly-Jo; Doherty, Karen A et al. (2015) Cues for Diotic and Dichotic Detection of a 500-Hz Tone in Noise Vary with Hearing Loss. J Assoc Res Otolaryngol 16:507-21
Kuwada, Shigeyuki; Kim, Duck O; Koch, Kelly-Jo et al. (2015) Near-field discrimination of sound source distance in the rabbit. J Assoc Res Otolaryngol 16:255-62
Kim, Duck O; Zahorik, Pavel; Carney, Laurel H et al. (2015) Auditory distance coding in rabbit midbrain neurons and human perception: monaural amplitude modulation depth as a cue. J Neurosci 35:5360-72
Rao, Akshay; Carney, Laurel H (2014) Speech enhancement for listeners with hearing loss based on a model for vowel coding in the auditory midbrain. IEEE Trans Biomed Eng 61:2081-91

Showing the most recent 10 out of 12 publications