Hearing loss is one of the most prevalent chronic conditions, affecting 37.5 million Americans. Although signal amplification in modern hearing aids makes sound more audible to hearing-impaired listeners, speech understanding in background interference remains the biggest challenge faced by hearing aid wearers. The proposed research seeks a monaural (one-microphone) solution to this challenge by developing supervised speech segregation based on deep learning. Unlike traditional speech enhancement, deep learning based speech segregation is driven by training data, and the three main components of a deep neural network (DNN) model are features, training targets, and network architecture. Recently, deep learning has achieved tremendous success in a variety of real-world applications. Our approach builds on the progress made in the PI's previous R01 project, which demonstrated, for the first time, substantial speech intelligibility improvements for hearing-impaired listeners in noise. A main focus of the proposed work in this cycle is to combat room reverberation in addition to background interference. The proposed work is designed to achieve three specific aims.
The first aim is to improve intelligibility of reverberant-noisy speech for hearing-impaired listeners. To achieve this aim, we will train DNNs to perform time-frequency masking.
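Time-frequency masking of the kind described above can be illustrated with the ideal ratio mask (IRM), a common training target in supervised speech segregation. The sketch below is only a minimal illustration under simplifying assumptions (toy random magnitude spectrograms, additive magnitudes); the array names and sizes are hypothetical, and the project's actual DNN would estimate the mask from acoustic features rather than compute it from the clean signals.

```python
import numpy as np

# Toy magnitude spectrograms (freq bins x time frames); in practice these
# would come from STFTs of the clean speech and the noise recordings.
rng = np.random.default_rng(0)
speech = rng.random((64, 100))   # hypothetical clean-speech magnitudes
noise = rng.random((64, 100))    # hypothetical noise magnitudes
mixture = speech + noise         # simplifying assumption: additive magnitudes

# Ideal ratio mask: per T-F unit, the fraction of mixture energy attributable
# to speech. Values lie in [0, 1]; a small epsilon guards the division.
irm = speech**2 / (speech**2 + noise**2 + 1e-8)

# A trained DNN estimates this mask from the mixture alone; multiplying the
# mixture by the mask attenuates noise-dominated T-F units.
enhanced = irm * mixture
```

Masking the mixture this way retains speech-dominated units nearly unchanged while suppressing units where noise dominates, which is the mechanism underlying the intelligibility gains sought in this aim.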
The second aim is to improve intelligibility of reverberant speech in the presence of competing speech. To achieve this aim, we will perform DNN training to estimate two ideal masks, one for the target talker and the other for the interfering talker.
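One simple instance of the two-mask idea is a pair of complementary ideal binary masks that assign each time-frequency unit to whichever talker dominates it. The sketch below uses hypothetical toy spectrograms; in the proposed work the two masks would be estimated jointly by a trained DNN, not computed from the premixed signals.

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.random((64, 100))       # hypothetical target-talker magnitudes
interferer = rng.random((64, 100))   # hypothetical interfering-talker magnitudes

# Ideal binary masks: each T-F unit goes to the locally dominant talker, so
# the two masks are complementary and partition the T-F plane.
target_mask = (target > interferer).astype(float)
interferer_mask = 1.0 - target_mask
```

Because the masks partition the T-F plane, applying each one to the two-talker mixture yields a separate estimate of each talker's speech.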
The third aim is to improve intelligibility of reverberant speech in combined speech and nonspeech interference. To achieve this aim, we will develop a two-stage DNN model where the first stage will be trained to remove nonspeech interference and the second stage to remove interfering speech. Eight speech intelligibility experiments involving both hearing-impaired and normal-hearing listeners will be conducted to systematically evaluate the developed system. The proposed project is expected to substantially close the speech intelligibility gap between hearing-impaired and normal-hearing listeners in daily conditions, with the ultimate goal of removing the gap altogether.
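The two-stage pipeline described above can be sketched as two masking operations applied in sequence. This is only a structural illustration: the masks here are placeholder random arrays standing in for DNN outputs, and the function and variable names are hypothetical rather than taken from the project.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical magnitudes of a mixture of target speech, interfering
# speech, and nonspeech noise (freq bins x time frames).
mixture = rng.random((64, 100))

def apply_mask(spec, mask):
    """Attenuate T-F units of a spectrogram according to a [0, 1] mask."""
    return spec * mask

# Stage 1: a mask (here a placeholder for a DNN estimate) trained to
# suppress nonspeech interference, leaving a two-talker speech mixture.
stage1_mask = rng.random((64, 100))
speech_only = apply_mask(mixture, stage1_mask)

# Stage 2: a second mask, estimated from the stage-1 output, suppresses
# the interfering talker, leaving an estimate of the target talker.
stage2_mask = rng.random((64, 100))
target_est = apply_mask(speech_only, stage2_mask)
```

Cascading the stages lets each DNN specialize: the first handles nonspeech noise, the second handles talker separation on the already-denoised signal.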

Public Health Relevance

A widely acknowledged deficit of hearing loss is reduced intelligibility of reverberant-noisy speech. How to improve speech intelligibility for hearing-impaired listeners in everyday environments is a major technical challenge. This project directly addresses this challenge, and the results from the project are expected to yield technical methods that can be translated to hearing prostheses, potentially benefiting millions of individuals with hearing loss.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
2R01DC012048-06
Application #
9443223
Study Section
Auditory System Study Section (AUD)
Program Officer
Miller, Roger
Project Start
2013-01-01
Project End
2022-12-31
Budget Start
2018-01-08
Budget End
2018-12-31
Support Year
6
Fiscal Year
2018
Total Cost
Indirect Cost
Name
Ohio State University
Department
Biostatistics & Other Math Sci
Type
Biomed Engr/Col Engr/Engr Sta
DUNS #
832127323
City
Columbus
State
OH
Country
United States
Zip Code
43210
Zhang, Xueliang; Wang, DeLiang (2017) Deep Learning Based Binaural Speech Separation in Reverberant Environments. IEEE/ACM Trans Audio Speech Lang Process 25:1075-1084
Healy, Eric W; Delfarah, Masood; Vasko, Jordan L et al. (2017) An algorithm to increase intelligibility for hearing-impaired listeners in the presence of a competing talker. J Acoust Soc Am 141:4230
Williamson, Donald S; Wang, Yuxuan; Wang, DeLiang (2016) Complex Ratio Masking for Monaural Speech Separation. IEEE/ACM Trans Audio Speech Lang Process 24:483-492
Chen, Jitong; Wang, Yuxuan; Wang, DeLiang (2016) Noise Perturbation for Supervised Speech Separation. Speech Commun 78:1-10
Zhang, Xiao-Lei; Wang, DeLiang (2016) A Deep Ensemble Learning Method for Monaural Speech Separation. IEEE/ACM Trans Audio Speech Lang Process 24:967-977
Chen, Jitong; Wang, Yuxuan; Yoho, Sarah E et al. (2016) Large-scale training to increase speech intelligibility for hearing-impaired listeners in novel noises. J Acoust Soc Am 139:2604
Williamson, Donald S; Wang, Yuxuan; Wang, DeLiang (2015) Estimating nonnegative matrix model activations with deep neural networks to increase perceptual speech quality. J Acoust Soc Am 138:1399-407
Narayanan, Arun; Wang, DeLiang (2015) Improving Robustness of Deep Neural Network Acoustic Models via Speech Separation and Joint Adaptive Training. IEEE/ACM Trans Audio Speech Lang Process 23:92-101
Healy, Eric W; Yoho, Sarah E; Chen, Jitong et al. (2015) An algorithm to increase speech intelligibility for hearing-impaired listeners in novel segments of the same noise type. J Acoust Soc Am 138:1660-9
Williamson, Donald S; Wang, Yuxuan; Wang, DeLiang (2014) Reconstruction techniques for improving the perceptual quality of binary masked speech. J Acoust Soc Am 136:892-902

Showing the most recent 10 out of 14 publications