This proposal presents plans to develop and test a new model for the processing of acoustic cues in both psychophysical tasks and real-world hearing. Masking paradigms are typically interpreted in the context of two models: the power-spectrum model, which is based on energy in the responses of one or more band-pass filters that represent peripheral tuning, and the envelope-power-spectrum model, which is based on the responses of a bank of modulation filters. These popular models, however, fail to explain robust performance in a number of psychophysical tasks, especially roving- or equalized-level tasks and roving- or equalized-envelope-energy tasks. The continued use of these models is largely due to a lack of viable alternatives.

Here, we propose a new, alternative model for masked detection and spectral coding that provides a mechanistic explanation for a number of psychophysical results, for listeners with or without hearing loss. Building upon our recent studies of envelope-related cues in masked detection, our proposal focuses on the role of neural-fluctuation cues in the responses of auditory-nerve fibers and, ultimately, on how these cues are represented by modulation-tuned neurons in the midbrain. These cues are robust in the healthy ear, but because they depend strongly on peripheral nonlinearities, they are substantially degraded in most common types of hearing loss. We will make detailed measurements of the use of envelope versus energy cues by individual listeners as a function of frequency and hearing threshold. These results will yield individualized models that will be used to predict thresholds in specific masking and discrimination tasks.

We will use computational, physiological, and psychophysical tools to test a diotic model of masked detection, focusing on two classic paradigms: notched-noise and forward-masking tasks. These paradigms have been used extensively to characterize tuning bandwidth, compression, and temporal processing in listeners with and without hearing loss. We will re-examine these tasks with neural-fluctuation-based representations. Our preliminary results show that the contrast in fluctuations across peripheral channels establishes a representation of stimulus features at the level of the midbrain that is robust in noise across a wide range of levels, thus addressing the primary challenges of roving-parameter paradigms.

These cues are particularly strong near spectral slopes and thus warrant consideration for other stimulus features with sharp spectral slopes, such as fricative consonants and pinna cues. We therefore also propose to extend our dichotic model, based on interaural differences in neural fluctuations, to the spectral slopes of pinna cues, which code sound location and externalization. Our preliminary work indicates that the neural-fluctuation cues associated with these diotic and dichotic stimuli occur in the modulation-frequency range where the majority of midbrain neurons are tuned. Consideration of these tasks and stimuli within the framework of neural-fluctuation cues provides a novel and general account of how stimulus spectra are coded by the normal and impaired ear.
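To make the contrast between these decision statistics concrete, the Python sketch below compares a power-spectrum-model cue (per-channel energy) with a simple envelope-fluctuation cue (envelope variance normalized by mean envelope power) for a tone in notched noise under a roved overall level. This is purely an illustration, not the proposed model: the filterbank, center frequencies, notch width, and tone level are assumptions chosen for demonstration, and the linear filterbank used here is a caricature that omits the peripheral nonlinearities on which the proposal's neural-fluctuation cues actually depend.

```python
# Illustrative sketch (not the authors' model): per-channel energy, the
# power-spectrum-model cue, changes with a roved overall level, while a
# normalized envelope-fluctuation cue is level-invariant. All parameters
# below are hypothetical choices for demonstration only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 48000
dur = 0.5
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)

def notched_noise_plus_tone(f_tone=2000.0, notch_hz=400.0, tone_db=-5.0):
    """Broadband noise with a spectral notch around f_tone, plus a tone."""
    noise = rng.standard_normal(t.size)
    lo = (f_tone - notch_hz / 2) / (fs / 2)
    hi = (f_tone + notch_hz / 2) / (fs / 2)
    b, a = butter(4, [lo, hi], btype="bandstop")   # carve the notch
    masker = filtfilt(b, a, noise)
    tone = 10 ** (tone_db / 20) * np.sqrt(2) * np.sin(2 * np.pi * f_tone * t)
    return masker / np.std(masker) + tone

def channel_stats(x, cf, half_bw_hz=200.0):
    """Energy and normalized envelope-fluctuation power in one channel."""
    band = [(cf - half_bw_hz) / (fs / 2), (cf + half_bw_hz) / (fs / 2)]
    b, a = butter(2, band, btype="bandpass")       # stand-in for cochlear tuning
    y = filtfilt(b, a, x)
    env = np.abs(hilbert(y))                       # Hilbert envelope
    energy = np.mean(y ** 2)                       # power-spectrum-model cue
    # Fluctuation power normalized by mean envelope power: unchanged by
    # an overall level scaling, unlike the energy cue.
    fluct = np.var(env) / np.mean(env) ** 2
    return energy, fluct

cfs = [1400, 1700, 2000, 2300, 2600]               # channels spanning the notch
for rove_db in (0.0, 10.0):                        # rove the overall level
    x = notched_noise_plus_tone() * 10 ** (rove_db / 20)
    print(f"rove = {rove_db:+.0f} dB")
    for cf in cfs:
        energy, fluct = channel_stats(x, cf)
        print(f"  cf={cf:4d} Hz  energy={energy:8.4f}  fluct={fluct:6.3f}")
```

Running this sketch shows a dip in the fluctuation cue at the on-frequency channel (where the flat-envelope tone dominates) against high fluctuation in the noise-driven flanking channels, and that this across-channel contrast survives the level rove while the energy profile is simply rescaled. This is the sense in which a fluctuation-contrast representation can address roving-level paradigms that defeat an energy-based statistic.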
Hearing loss typically involves difficulty understanding complex sounds such as speech, especially in noise. Knowledge of how the healthy brain copes with difficult listening environments will provide new and important insights for aiding listeners with hearing loss. The Public Health Relevance of this project lies in developing a better understanding of the difficulties that listeners with hearing loss face in noisy situations. We are developing and testing a computational model of the auditory system for listeners with and without sensorineural hearing loss.