This proposal presents plans to develop and test a new model for the processing of acoustic cues in both psychophysical tasks and real-world hearing. Masking paradigms are typically interpreted in the context of two models: the power-spectrum model, which is based on the energy at the output of one or more band-pass filters that represent peripheral tuning, and the envelope-power-spectrum model, which is based on the responses of a bank of modulation filters. These popular models, however, fail to explain robust performance in a number of psychophysical tasks, especially roving- or equalized-level and roving- or equalized-envelope-energy tasks. Their continued use is largely due to a lack of viable alternatives.

Here, we propose a new, alternative model for masked detection and spectral coding that provides a mechanistic explanation for a number of psychophysical results, for listeners with or without hearing loss. Building upon our recent studies of envelope-related cues in masked detection, our proposal focuses on the role of neural-fluctuation cues in the responses of auditory-nerve fibers, and ultimately on how these cues are represented by modulation-tuned neurons in the midbrain. These cues are robust in the healthy ear but, because they depend strongly on peripheral nonlinearities, they are substantially degraded in most common types of hearing loss. We will make detailed measurements of the use of envelope vs. energy cues by individual listeners as a function of frequency and hearing threshold. These results will provide individualized models that will be used to predict thresholds in specific masking and discrimination tasks. We will use computational, physiological, and psychophysical tools to test a diotic model of masked detection, focusing on two classic paradigms: notched-noise and forward-masking tasks.
These psychophysical tools have been used extensively to characterize tuning bandwidth, compression, and temporal processing in listeners with and without hearing loss. We will re-examine these tasks with neural-fluctuation-based representations. Our preliminary results show that the contrast in fluctuations across peripheral channels establishes a representation of stimulus features at the level of the midbrain that is robust in noise across a wide range of levels, thus addressing the primary challenges of roving-parameter paradigms. These cues are particularly strong near spectral slopes, and thus warrant consideration for other stimulus features with sharp spectral slopes, such as fricative consonants and pinna cues. We therefore also propose to extend our dichotic model, based on interaural differences in neural fluctuations, to the spectral slopes of pinna cues, which code sound location and externalization. Our preliminary work indicates that neural-fluctuation cues associated with the diotic and dichotic stimuli occur in the modulation-frequency range where the majority of midbrain neurons are tuned. Consideration of these tasks and stimuli in the framework of neural-fluctuation cues provides a novel and general framework for understanding how stimulus spectra are coded by the normal and impaired ear.
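The contrast between the two benchmark decision statistics can be caricatured in a few lines of code: both start from a band-pass-filtered waveform, but the power-spectrum model reads out its energy while the envelope-power-spectrum model reads out fluctuations of its envelope. The sketch below is illustrative only, and not the proposal's auditory model: the brick-wall FFT filter, 500-Hz tone, levels, and bandwidth are arbitrary stand-ins for peripheral tuning.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10000                      # sampling rate (Hz), illustrative
t = np.arange(int(fs * 0.3)) / fs

def bandpass(x, lo, hi, fs):
    """Crude brick-wall band-pass filter standing in for a peripheral filter."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(X, len(x))

def envelope(x):
    """Hilbert envelope computed via the analytic signal (FFT construction)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:n // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def decision_stats(x, fs, cf=500.0, bw=100.0):
    """Energy (power-spectrum model) and normalized envelope power
    (envelope-power-spectrum model) in one band-pass channel."""
    y = bandpass(x, cf - bw / 2, cf + bw / 2, fs)
    energy = np.mean(y ** 2)
    env = envelope(y)
    env_power = np.var(env) / np.mean(env) ** 2   # level-normalized
    return energy, env_power

noise = rng.standard_normal(len(t))
tone = 0.5 * np.sin(2 * np.pi * 500 * t)

e_n, ep_n = decision_stats(noise, fs)            # noise alone
e_sn, ep_sn = decision_stats(noise + tone, fs)   # tone plus noise
```

Adding the tone raises the in-band energy, and it also flattens the envelope, so the normalized envelope power drops; that reduction in fluctuation is the kind of cue the abstract refers to, and in a roving-level paradigm the normalized statistic survives while the raw energy cue does not.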

Public Health Relevance

Hearing loss typically involves difficulty understanding complex sounds such as speech, especially in noise. Knowledge of how the healthy brain copes with difficult listening environments will provide new and important insights for aiding listeners with hearing loss. This project seeks a better understanding of the difficulties that listeners with hearing loss face in noisy situations by developing and testing a computational model of the auditory system for listeners with and without sensorineural hearing loss.

National Institutes of Health (NIH)
National Institute on Deafness and Other Communication Disorders (NIDCD)
Research Project (R01)
Study Section: Auditory System Study Section (AUD)
Program Officer: Miller, Roger
Institution: University of Rochester, Biomedical Engineering, School of Medicine & Dentistry, United States
Carney, Laurel H (2018) Special issue on computational models of hearing. Hear Res 360:1-2
Zuk, Nathaniel J; Carney, Laurel H; Lalor, Edmund C (2018) Preferred Tempo and Low-Audio-Frequency Bias Emerge From Simulated Sub-cortical Processing of Sounds With a Musical Beat. Front Neurosci 12:349
Carney, Laurel H (2018) Supra-Threshold Hearing and Fluctuation Profiles: Implications for Sensorineural and Hidden Hearing Loss. J Assoc Res Otolaryngol 19:331-352
Salimi, Nima; Zilany, Muhammad S A; Carney, Laurel H (2017) Modeling Responses in the Superior Paraolivary Nucleus: Implications for Forward Masking in the Inferior Colliculus. J Assoc Res Otolaryngol 18:441-456
Carney, Laurel H; Kim, Duck O; Kuwada, Shigeyuki (2016) Speech Coding in the Midbrain: Effects of Sensorineural Hearing Loss. Adv Exp Med Biol 894:427-435
Mao, Junwen; Carney, Laurel H (2015) Tone-in-noise detection using envelope cues: comparison of signal-processing-based and physiological models. J Assoc Res Otolaryngol 16:121-33
Mao, Junwen; Koch, Kelly-Jo; Doherty, Karen A et al. (2015) Cues for Diotic and Dichotic Detection of a 500-Hz Tone in Noise Vary with Hearing Loss. J Assoc Res Otolaryngol 16:507-21
Kuwada, Shigeyuki; Kim, Duck O; Koch, Kelly-Jo et al. (2015) Near-field discrimination of sound source distance in the rabbit. J Assoc Res Otolaryngol 16:255-62
Kim, Duck O; Zahorik, Pavel; Carney, Laurel H et al. (2015) Auditory distance coding in rabbit midbrain neurons and human perception: monaural amplitude modulation depth as a cue. J Neurosci 35:5360-72
Mao, Junwen; Carney, Laurel H (2014) Binaural detection with narrowband and wideband reproducible noise maskers. IV. Models using interaural time, level, and envelope differences. J Acoust Soc Am 135:824-37

Showing the most recent 10 out of 12 publications