Cochlear implants (CIs) successfully deliver auditory information to the severely hearing impaired and the profoundly deaf. However, they perform poorly in noisy backgrounds or in the presence of competing speakers. While research has shown that the brain can learn to adapt to adverse listening conditions using top-down processing, learning and top-down mechanisms can only help to a limited extent when the information transmitted to the auditory system is severely degraded. The long-term goal of our work is to uncover basic mechanisms underlying CI listeners' auditory sensitivity, and to relate these findings to their speech perception with the device. It is increasingly apparent that, to improve sound quality, speaker recognition, source segregation, and listening in noise, CIs will need to provide fine-grained information about the speech spectrum. It is unclear, however, to what extent CI listeners can process such information in a multi-channel stimulation context. The proposed experiments will quantify multi-channel spectral pattern recognition, multi-channel temporal pattern recognition, and multi-channel spectro-temporal interactions in CI listeners in the presence of competing, fluctuating maskers. Novel stimulation methods, such as current-steered virtual channel stimulation, will be explored, as will the effects of more focused stimulation modes. Results will clarify the role of across-channel interactions in complex, dynamic electrical stimuli. In parallel, measures of phoneme recognition, gender recognition, and speech intonation recognition will be made, both in quiet and in competing noise. Comparing these results to those obtained from the psychophysical experiments will yield insight into the aspects of electrical stimulation that are critical for speech recognition in realistic listening situations. The results will not only yield important fundamental understanding of complex auditory processing by CI listeners, but will also apply, in a broader sense, to auditory processing of complex stimuli by hearing-impaired and normal-hearing listeners. The clinical relevance of the proposed work is that the findings will contribute to the development of clinical assessment tools and new speech processor designs for future auditory prostheses.
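To make the current-steered virtual channel concept concrete, the sketch below illustrates the basic idea: total current is split between two adjacent physical electrodes, and the weighting shifts the perceived place of stimulation between them. This is a minimal, hypothetical illustration, not the project's actual stimulation software; the function name, the weighting parameter alpha, and the current units are assumptions made for the example.

```python
# Illustrative sketch of current steering between two adjacent electrodes.
# alpha = 0 delivers all current to the apical electrode, alpha = 1 to the
# basal electrode; intermediate values create an intermediate "virtual
# channel" between the two physical contacts.

def virtual_channel_currents(total_current_ua, alpha):
    """Split a total current (in microamps) across an adjacent electrode pair.

    Returns (apical_current, basal_current). Assumes simultaneous,
    in-phase stimulation of the two electrodes.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    apical = (1.0 - alpha) * total_current_ua
    basal = alpha * total_current_ua
    return apical, basal

# Example: steering halfway between an adjacent electrode pair, 400 uA total.
print(virtual_channel_currents(400.0, 0.5))  # -> (200.0, 200.0)
```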