The healthy ear detects and identifies signals in a noisy environment better than any man-made algorithm or system. However, mild-to-moderate sensorineural hearing loss causes significant difficulty for listeners in noisy environments. We are combining behavioral, physiological, and computational-modeling approaches to study neural processing of complex sounds. The efforts of recent years have focused on masking, and results from all three approaches have led to the development of a cross-frequency coincidence-detection model of masked detection at low frequencies. We refer to this model as the phase-opponency model: it takes advantage of the relative discharge times of auditory-nerve (AN) fibers tuned to different frequencies. This model provides a physiologically realistic alternative to the classical power-spectrum (or "energy") model of masking and resolves fundamental problems associated with that model. We propose to test hypotheses suggested by the phase-opponency model regarding masked detection of tones in noise at low frequencies, and to extend our studies to high frequencies and to other masking paradigms. Throughout these studies, we will continue to combine behavioral experiments in humans and rabbits, physiological recordings in awake rabbit and anesthetized gerbil, and computational modeling. We will also continue to use modeling tools developed during the last few years that not only capture the detailed properties of healthy and impaired auditory-nerve fibers and central auditory neurons, but also allow quantitative comparison of performance predicted by population models with actual measurements of behavioral performance. The long-term goal of this effort is to provide novel signal-processing strategies to aid the hearing impaired, based on these unique neural strategies for processing signals in noise.
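To illustrate the core computation, the sketch below (Python with NumPy/SciPy) is a minimal, hypothetical rendering of a phase-opponency-style coincidence statistic, not the implementation used in this project: two simulated channels receive the same narrowband noise, the target tone reaches them roughly 180 degrees out of phase, and a cross-channel coincidence measure (here, the mean product of half-wave-rectified responses) decreases when the tone is added. All filter settings, amplitudes, and the choice of statistic are illustrative assumptions.

```python
"""Toy sketch (not the authors' model) of a phase-opponency-style
cross-frequency coincidence statistic: two model "AN channels" see the
same narrowband noise, but the target tone reaches them ~180 deg out of
phase, so a cross-channel coincidence measure drops when the tone is
present. All parameter values are illustrative assumptions."""
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
fs = 20000          # sampling rate (Hz), assumed
dur = 0.3           # stimulus duration (s), assumed
f_tone = 500.0      # low-frequency target tone (Hz), assumed
t = np.arange(int(fs * dur)) / fs

# Narrow bandpass around the tone frequency, standing in for AN-fiber tuning.
b, a = butter(2, [0.85 * f_tone, 1.15 * f_tone], btype="band", fs=fs)

def channel_pair(noise, tone_amp):
    """Half-wave-rectified responses of two channels that receive the same
    filtered noise but the tone in opposite phase (the 'opponency')."""
    tone = tone_amp * np.sin(2 * np.pi * f_tone * t)
    ch_a = filtfilt(b, a, noise + tone)   # tone in phase
    ch_b = filtfilt(b, a, noise - tone)   # tone ~180 deg out of phase
    return np.maximum(ch_a, 0.0), np.maximum(ch_b, 0.0)

def coincidence_stat(noise, tone_amp):
    """Cross-channel coincidence: mean product of rectified responses."""
    r_a, r_b = channel_pair(noise, tone_amp)
    return np.mean(r_a * r_b)

noise = rng.standard_normal(t.size)
print("noise alone :", coincidence_stat(noise, tone_amp=0.0))
print("tone + noise:", coincidence_stat(noise, tone_amp=0.5))
# The statistic is high for noise alone and falls when the tone is added,
# which is the cue a phase-opponency account attributes to across-frequency
# coincidence detectors, rather than a change in within-band energy.
```

Because the decision variable here depends on cross-channel timing rather than on the energy passed by a single filter, it is unaffected by overall level changes that defeat a classical power-spectrum detector; that contrast is the point of the illustration.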