The goal is to improve the performance of hearing aids in noisy environments. This will be done by developing a computational method that processes a sound mixture to enhance a target signal embedded in noise or competing speech. The result of the processing will be a waveform whose physical signal-to-noise ratio (SNR) has been increased. It is expected that when this processed signal is used as the input to a hearing aid or implantable prosthesis, listeners with hearing loss will have less difficulty identifying the signal. Engineering approaches have been applied to this problem in the past, but with limited success. The innovative approach to be used here is to design a processor that duplicates some of the processing carried out by neurons in the auditory pathway. The PI has measured the responses of neurons to sounds that normal listeners perceive either as a single sound or as a mixture of two sounds ("double sounds"). Waveforms that are perceived as double elicit stereotypical, complex temporal discharge patterns in central auditory neurons. A computational model devised by the PI can duplicate the fine temporal details of those discharge patterns, and studies based on the model have led to specific hypotheses about how double sounds are encoded across populations of neurons. The proposed experiments will generate computational methods for decoding the responses of populations of neurons. After the decoding step, the original sound mixture will have been separated into two parts; the part that corresponds to the target signal can then be used to synthesize a new sound. It is hypothesized that this re-synthesized sound will retain the important properties of the target signal while rejecting or diminishing the properties of the competing background. The hypothesis will be tested with psychophysical speech-identification experiments.
The identification of signals in the presence of noise will be measured; the sounds will then be processed, and identification will be re-measured. Studies with normal-hearing listeners will be conducted first, and parameters of the model will be refined based on those results. Subsequent experiments will measure the identification of speech in noise, with and without processing, in listeners with hearing loss. It is projected that by the end of the two-year period, the processing algorithm will have been improved to the point at which it could be incorporated into commercial hearing devices.
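For concreteness, the listed publications evaluate binary time-frequency masks, one form of the enhancement described above: time-frequency cells of the mixture where the target dominates the noise are kept, the rest are discarded, and a waveform is re-synthesized. The sketch below is illustrative, not the project's actual algorithm; the function names, the STFT parameters, and the local SNR criterion `lc_db` are assumptions.

```python
# Minimal sketch of an ideal binary time-frequency mask, assuming clean
# access to the target and noise (as in laboratory evaluations). Names
# and parameters are illustrative, not taken from the cited studies.
import numpy as np
from scipy.signal import stft, istft

def apply_ideal_binary_mask(target, noise, fs=16000, lc_db=0.0):
    """Enhance `target` within (target + noise) via an ideal binary mask.

    Cells whose local target-to-noise ratio exceeds lc_db (dB) are kept;
    all other cells are zeroed before re-synthesis.
    """
    mixture = target + noise
    _, _, Z_mix = stft(mixture, fs=fs, nperseg=256)
    _, _, Z_tgt = stft(target, fs=fs, nperseg=256)
    _, _, Z_noi = stft(noise, fs=fs, nperseg=256)
    # Local SNR in each time-frequency cell (small floor avoids log(0)).
    local_snr_db = 20 * np.log10((np.abs(Z_tgt) + 1e-12) /
                                 (np.abs(Z_noi) + 1e-12))
    mask = (local_snr_db > lc_db).astype(float)
    # Re-synthesize a waveform from the masked mixture spectrogram.
    _, enhanced = istft(Z_mix * mask, fs=fs, nperseg=256)
    return enhanced[:len(mixture)]

def snr_db(signal, residual):
    # Physical SNR: signal power over residual power, in dB.
    return 10 * np.log10(np.sum(signal ** 2) / np.sum(residual ** 2))
```

In this idealized setting the mask raises the physical SNR of the output relative to the unprocessed mixture; a practical processor would instead have to estimate the mask from the mixture alone, which is the hard problem the proposed neural-decoding approach addresses.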
Hearing-impaired listeners have great difficulty understanding speech in noisy environments, largely because they cannot segregate simultaneous sounds as effectively as listeners with normal hearing. Hearing aids and cochlear implants do not restore the ability to process speech in noise. The proposed project will lead to the development of a signal processing strategy for hearing aids or implants that extracts speech from noise before the signal is delivered to the prosthetic device. This will at least partially restore the ability to segregate simultaneous sounds.
Ahmadi, Mahnaz; Gross, Vauna L; Sinex, Donal G (2013) Perceptual learning for speech in noise after application of binary time-frequency masks. J Acoust Soc Am 133:1687-92
Sinex, Donal G (2013) Recognition of speech in noise after application of time-frequency masks: dependence on frequency and threshold parameters. J Acoust Soc Am 133:2390-6