The most common problem reported by people with sensorineural hearing loss is listening in the presence of background noise. The new efforts presented in this proposal will focus on the development, testing, and application of a composite computational model for physiological and psychophysical responses to complex sounds, especially sounds in the presence of background noise. A model that explains both the impressive ability of normal-hearing listeners to hear sounds in noisy environments and the difficulty that listeners with hearing loss have in doing so will be an invaluable tool for understanding and predicting listeners' performance in difficult auditory situations. This information can then be used to design new and improved hearing-aid signal-processing strategies that succeed in noisy situations. Previous computational models have successfully described auditory processing at several levels of the auditory system, with phenomenological models of the auditory periphery that include cochlear tuning, transduction, and discharge times of individual auditory-nerve fibers. More recent models describe single neurons and neural circuits in the brainstem and midbrain, including binaural interactions and neural amplitude-modulation processing. Computational models of neural population responses have also been developed to predict the performance of listeners with and without hearing loss in basic psychophysical tasks. In the proposed project, experience with these models will be leveraged to develop a novel, composite model that ties together these different levels of processing, providing a tool for studying the interactions of stimulus cues and neural mechanisms along the auditory pathway. This computational model for monaural and binaural processing of complex sounds will be tested and refined using physiological recordings from the midbrain (inferior colliculus) of awake rabbits and psychophysical tests in human listeners.
The model will be used to predict existing psychophysical data for masked detection, both with and without binaural cues, by listeners with normal hearing. These psychophysical studies will be extended to include listeners with sensorineural hearing loss. Finally, the new model will predict performance of listeners with and without hearing loss on a masked amplitude-modulation (AM) detection task using reproducible modulation maskers. Physiological tuning for amplitude-modulation frequency first emerges at the level of the midbrain, the highest level of the proposed model. Thus, this task will allow direct comparison between physiological aspects of AM processing in the midbrain and psychophysical performance. This proposal provides a systematic transition from modeling basic physiological responses to predicting performance of listeners with and without hearing loss in psychophysical detection tasks in the audio- and modulation-frequency domains. The long-term goal of this research program is to develop a robust tool for the development and testing of novel signal-processing strategies for listeners with hearing loss.
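One envelope-based cue that models of masked tone detection exploit (see the Mao & Carney 2015 publication below) is that adding a tone to narrowband noise flattens the stimulus envelope. The following is a minimal, illustrative sketch of that cue, not the proposed composite model; the sampling rate, noise bandwidth, tone level, and trial count are all assumptions chosen for demonstration.

```python
import numpy as np

# Sketch of the envelope-flatness cue for tone-in-noise detection:
# adding a tone to narrowband noise reduces the relative fluctuation
# (coefficient of variation, CV) of the envelope, which a decision
# statistic can pick up. All parameter values are illustrative.

rng = np.random.default_rng(0)
fs = 10_000               # sampling rate, Hz (assumption)
dur = 0.5                 # stimulus duration, s (assumption)
n = int(fs * dur)

def narrowband_quadrature(bw_hz=50.0):
    """Lowpass Gaussian quadrature components a(t), b(t); the noise
    itself would be a(t)*cos(2*pi*fc*t) + b(t)*sin(2*pi*fc*t), so the
    envelope is sqrt(a^2 + b^2) without needing an explicit carrier."""
    k = max(1, int(fs / bw_hz))            # crude moving-average lowpass
    a = np.convolve(rng.standard_normal(n), np.ones(k) / k, mode="same")
    b = np.convolve(rng.standard_normal(n), np.ones(k) / k, mode="same")
    return a, b

def envelope_cv(a, b):
    """Coefficient of variation of the envelope sqrt(a^2 + b^2)."""
    env = np.hypot(a, b)
    return env.std() / env.mean()

cv_noise, cv_tone = [], []
for _ in range(200):                       # Monte Carlo trials
    a, b = narrowband_quadrature()
    cv_noise.append(envelope_cv(a, b))
    tone_amp = 3.0 * a.std()               # tone level: assumption
    cv_tone.append(envelope_cv(a + tone_amp, b))  # tone in cosine phase

print(np.mean(cv_noise), np.mean(cv_tone))
# Mean CV is smaller for tone-plus-noise than for noise alone, so a
# criterion on envelope CV separates the two stimulus classes.
```

The noise-alone envelope is approximately Rayleigh-distributed (CV near 0.5), while the tone-plus-noise envelope is approximately Rician with smaller relative fluctuations; comparing CV across intervals is one simple signal-processing stand-in for the physiological fluctuation cues discussed in the proposal.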

Public Health Relevance

This project aims to develop a better understanding of the difficulties that listeners with hearing loss experience in noisy situations. We will build a computational model of the auditory system for listeners with and without sensorineural hearing loss. This model will be used to predict listeners' performance on detection tasks in noisy situations. Because hearing loss typically involves difficulty understanding complex sounds, especially in noise, knowledge of how the healthy brain copes with difficult listening environments will provide new and important insights for aiding listeners with hearing loss.

National Institutes of Health (NIH)
National Institute on Deafness and Other Communication Disorders (NIDCD)
Research Project (R01)
Study Section
Auditory System Study Section (AUD)
Program Officer
Miller, Roger
University of Rochester
Biomedical Engineering
Schools of Dentistry
United States
Zuk, Nathaniel J; Carney, Laurel H; Lalor, Edmund C (2018) Preferred Tempo and Low-Audio-Frequency Bias Emerge From Simulated Sub-cortical Processing of Sounds With a Musical Beat. Front Neurosci 12:349
Carney, Laurel H (2018) Supra-Threshold Hearing and Fluctuation Profiles: Implications for Sensorineural and Hidden Hearing Loss. J Assoc Res Otolaryngol 19:331-352
Carney, Laurel H (2018) Special issue on computational models of hearing. Hear Res 360:1-2
Salimi, Nima; Zilany, Muhammad S A; Carney, Laurel H (2017) Modeling Responses in the Superior Paraolivary Nucleus: Implications for Forward Masking in the Inferior Colliculus. J Assoc Res Otolaryngol 18:441-456
Carney, Laurel H; Kim, Duck O; Kuwada, Shigeyuki (2016) Speech Coding in the Midbrain: Effects of Sensorineural Hearing Loss. Adv Exp Med Biol 894:427-435
Mao, Junwen; Carney, Laurel H (2015) Tone-in-noise detection using envelope cues: comparison of signal-processing-based and physiological models. J Assoc Res Otolaryngol 16:121-133
Mao, Junwen; Koch, Kelly-Jo; Doherty, Karen A et al. (2015) Cues for Diotic and Dichotic Detection of a 500-Hz Tone in Noise Vary with Hearing Loss. J Assoc Res Otolaryngol 16:507-521
Kuwada, Shigeyuki; Kim, Duck O; Koch, Kelly-Jo et al. (2015) Near-field discrimination of sound source distance in the rabbit. J Assoc Res Otolaryngol 16:255-262
Kim, Duck O; Zahorik, Pavel; Carney, Laurel H et al. (2015) Auditory distance coding in rabbit midbrain neurons and human perception: monaural amplitude modulation depth as a cue. J Neurosci 35:5360-5372
Rao, Akshay; Carney, Laurel H (2014) Speech enhancement for listeners with hearing loss based on a model for vowel coding in the auditory midbrain. IEEE Trans Biomed Eng 61:2081-2091

Showing the most recent 10 out of 12 publications