The goal of this proposal is to establish a new model for masked detection and frequency resolution, applicable to listeners with normal hearing and with hearing loss, based on realistic physiological response properties. We are developing a new, fundamental framework for neural representations of acoustic stimuli that can predict a wide range of psychoacoustic phenomena. This framework is focused on neural fluctuations of auditory-nerve (AN) fibers, rather than on energy, average rates, or phase-locking to temporal fine structure. Neural fluctuations (NFs) are the relatively slow changes over time in AN responses, with rates ranging from tens of Hz to a few hundred Hz. Fluctuations in this frequency range are of interest because they strongly excite, or suppress, neurons in the auditory central nervous system. The NF model is based on known nonlinear properties of inner-hair-cell and AN responses, and thus has important implications for interpreting masking results in listeners with sensorineural hearing loss.

A representation of masked sounds based on the NF model is an alternative to the commonly accepted excitation-pattern representation provided by the power-spectrum model of masking. The NF model successfully describes basic masking thresholds, as well as results from many experimental paradigms for which the power-spectrum (or energy) model fails. It is also not limited to low frequencies, as are models based on phase-locking to temporal fine structure. Here, the NF framework will be applied not only to masking paradigms, but also to stimulus paradigms that focus on frequency resolution, such as discrimination of the fundamental frequency of harmonic complex tones or detection of increments in profile-analysis stimuli. Current models for the representation of these stimuli rely on a conceptual peripheral filter bank with critical bandwidths estimated from human masking results using the power-spectrum model. Critical bandwidths, assumed to limit the frequency resolution of auditory representations of complex sounds, are not consistent with known physiology. In contrast, frequency resolution in the NF model is grounded in physiologically realistic response properties of AN fibers and in the sensitivity to neural fluctuations observed in the midbrain.

Finally, to explain perception based on NF cues across the entire range of audible sound levels, we will extend our AN model to include NF-driven feedback gain control, guided by the known physiology and anatomy of the medial olivocochlear efferent system. The studies proposed here include: i) computational modeling to predict human thresholds, including re-examination of classical datasets that can, and those that cannot, be explained by the power-spectrum model; ii) related physiological studies in the midbrain, where cells are strongly sensitive to fluctuating inputs; and iii) new psychophysical studies designed to challenge the NF model in listeners with normal hearing and in those with sensorineural hearing loss.
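To illustrate the kind of decision variable the NF framework implies, the sketch below contrasts a fluctuation-based statistic with an energy-based one. This is a minimal, hypothetical example rather than the proposal's actual model: the function names (nf_statistic, energy_statistic), the 4th-order Butterworth filter, and the toy sinusoidally fluctuating firing rate are assumptions made here for illustration; only the fluctuation band (tens of Hz to a few hundred Hz) is taken from the description above. In the proposed work, instantaneous rates would come from a physiologically detailed AN model rather than a toy waveform.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def nf_statistic(an_rate, fs, band=(10.0, 300.0)):
    """RMS of the slow fluctuations in a simulated AN instantaneous firing rate.

    Bandpass filtering keeps only the neural-fluctuation range
    (roughly tens of Hz to a few hundred Hz) described in the abstract.
    """
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    fluctuations = sosfiltfilt(sos, an_rate)
    return np.sqrt(np.mean(fluctuations ** 2))

def energy_statistic(signal):
    """Energy-based decision variable analogous to the power-spectrum model:
    overall RMS of an (already band-limited) stimulus."""
    return np.sqrt(np.mean(signal ** 2))

# Toy usage with hypothetical signals standing in for real model inputs/outputs.
fs = 100_000.0                      # sampling rate, Hz
t = np.arange(0.0, 0.3, 1.0 / fs)   # 300-ms segment
stimulus = np.sin(2.0 * np.pi * 1000.0 * t)                # arbitrary 1-kHz tone
an_rate = 100.0 + 30.0 * np.sin(2.0 * np.pi * 120.0 * t)   # AN rate with a 120-Hz fluctuation

print(f"NF statistic (on AN rate):      {nf_statistic(an_rate, fs):.1f} spikes/s RMS")
print(f"Energy statistic (on stimulus): {energy_statistic(stimulus):.2f} RMS")
```

In a complete simulation of masked detection, thresholds would be estimated from how much a target changes such statistics across a population of model AN fibers; the sketch above is intended only to show the contrast between a fluctuation-based and an energy-based decision variable.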
Hearing loss typically involves difficulty understanding sounds, especially in noisy backgrounds. We are developing and testing a new framework for describing the way that sounds are represented in the responses of neurons. Knowledge of how the healthy brain copes with difficult listening environments will provide new and important insights for aiding listeners with hearing loss. The public health relevance of this project lies in developing a better understanding of the difficulties that listeners with hearing loss experience in noisy situations.