Hearing loss is one of the most prevalent chronic conditions, affecting 37.5 million Americans. Although signal amplification in modern hearing aids makes sound more audible to hearing-impaired listeners, speech understanding in background interference remains the biggest challenge faced by hearing aid wearers. The proposed research seeks a monaural (one-microphone) solution to this challenge by developing supervised speech segregation based on deep learning. Unlike traditional speech enhancement, deep learning based speech segregation is driven by training data, and the three main components of a deep neural network (DNN) model are acoustic features, training targets, and network architecture. Recently, deep learning has achieved tremendous success in a variety of real-world applications. Our approach builds on the progress made in the PI's previous R01 project, which demonstrated, for the first time, substantial speech intelligibility improvements for hearing-impaired listeners in noise. A main focus of the proposed work in this cycle is to combat room reverberation in addition to background interference. The proposed work is designed to achieve three specific aims.
The first aim is to improve intelligibility of reverberant-noisy speech for hearing-impaired listeners. To achieve this aim, we will train DNNs to perform time-frequency masking.
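As a concrete illustration of time-frequency masking, consider the ideal ratio mask (IRM), a commonly used training target in supervised segregation. The sketch below is illustrative only and is not the proposed system; the feature representation and smoothing constant are assumptions.

```python
import numpy as np

def ideal_ratio_mask(speech_mag, noise_mag, eps=1e-8):
    """IRM training target: per time-frequency unit, the ratio of
    speech energy to total (speech + noise) energy, bounded in [0, 1].
    Inputs are magnitude spectrograms of the premixed signals."""
    s2, n2 = speech_mag ** 2, noise_mag ** 2
    return s2 / (s2 + n2 + eps)

def apply_mask(mixture_stft, estimated_mask):
    """Attenuate noise-dominated T-F units of the complex mixture
    STFT; the masked STFT is then inverted to a time-domain waveform
    for listening tests."""
    return mixture_stft * estimated_mask
```

At inference time, only the noisy mixture is available, so a trained DNN estimates the mask from mixture features, and `apply_mask` resynthesizes the segregated speech.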
The second aim is to improve intelligibility of reverberant speech in the presence of competing speech. To achieve this aim, we will perform DNN training to estimate two ideal masks, one for the target talker and the other for the interfering talker.
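A minimal sketch of the two-mask idea follows, assuming a simple feedforward network in PyTorch; the shared trunk, layer sizes, and feature dimension are hypothetical choices for illustration, not the proposed architecture.

```python
import torch
import torch.nn as nn

class TwoTalkerMaskNet(nn.Module):
    """Estimates two masks per frame: one for the target talker and
    one for the interfering talker. n_freq and hidden are assumed."""
    def __init__(self, n_freq=161, hidden=1024):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_freq, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.Linear(hidden, 2 * n_freq)  # target + interferer
        self.n_freq = n_freq

    def forward(self, feats):
        out = torch.sigmoid(self.heads(self.trunk(feats)))
        target_mask, interferer_mask = out.split(self.n_freq, dim=-1)
        return target_mask, interferer_mask

# Training would minimize, e.g., the mean squared error between the
# two estimated masks and the two ideal masks computed from the
# premixed talker signals.
```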
The third aim is to improve intelligibility of reverberant speech in combined speech and nonspeech interference. To achieve this aim, we will develop a two-stage DNN model in which the first stage will be trained to remove nonspeech interference and the second stage to remove interfering speech. Eight speech intelligibility experiments involving both hearing-impaired and normal-hearing listeners will be conducted to systematically evaluate the developed system. The proposed project is expected to substantially close the speech intelligibility gap between hearing-impaired and normal-hearing listeners in daily conditions, with the ultimate goal of removing the gap altogether.
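One way the two stages could be chained at inference time is sketched below; `denoise_net` and `separation_net` are hypothetical stand-ins for the trained stage-1 and stage-2 mask estimators, and the feature-domain composition is an assumption rather than the project's specified pipeline.

```python
def two_stage_segregation(mixture_feats, denoise_net, separation_net):
    """Stage 1 suppresses nonspeech interference; stage 2 removes the
    remaining interfering talker. Both stages are assumed to be mask
    estimators operating on compatible spectral features."""
    noise_mask = denoise_net(mixture_feats)    # stage 1: denoising mask
    denoised = mixture_feats * noise_mask      # nonspeech removed
    target_mask, _ = separation_net(denoised)  # stage 2: talker masks
    return denoised * target_mask              # target talker retained
```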

Public Health Relevance

A widely acknowledged deficit of hearing loss is reduced intelligibility of reverberant-noisy speech. Improving speech intelligibility for hearing-impaired listeners in everyday environments is a major technical challenge. This project directly addresses this challenge, and its results are expected to yield technical methods that can be translated into hearing prostheses, potentially benefiting millions of individuals with hearing loss.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
5R01DC012048-08
Application #
9831633
Study Section
Auditory System Study Section (AUD)
Program Officer
Miller, Roger
Project Start
2013-01-01
Project End
2022-12-31
Budget Start
2020-01-01
Budget End
2020-12-31
Support Year
8
Fiscal Year
2020
Total Cost
Indirect Cost
Name
Ohio State University
Department
Biostatistics & Other Math Sci
Type
Biomed Engr/Col Engr/Engr Sta
DUNS #
832127323
City
Columbus
State
OH
Country
United States
Zip Code
43210
Zhao, Yan; Wang, DeLiang; Johnson, Eric M et al. (2018) A deep learning based segregation algorithm to increase speech intelligibility for hearing-impaired listeners in reverberant-noisy conditions. J Acoust Soc Am 144:1627
Chen, Jitong; Wang, DeLiang (2017) Long short-term memory for speaker generalization in supervised speech separation. J Acoust Soc Am 141:4705
Liu, Yuzhou; Wang, DeLiang (2017) Speaker-dependent multipitch tracking using deep neural networks. J Acoust Soc Am 141:710
Zhang, Xueliang; Wang, DeLiang (2017) Deep Learning Based Binaural Speech Separation in Reverberant Environments. IEEE/ACM Trans Audio Speech Lang Process 25:1075-1084
Healy, Eric W; Delfarah, Masood; Vasko, Jordan L et al. (2017) An algorithm to increase intelligibility for hearing-impaired listeners in the presence of a competing talker. J Acoust Soc Am 141:4230
Wang, DeLiang (2017) Deep Learning Reinvents the Hearing Aid: Finally, wearers of hearing aids can pick out a voice in a crowded room. IEEE Spectr 54:32-37
Williamson, Donald S; Wang, Yuxuan; Wang, DeLiang (2016) Complex Ratio Masking for Monaural Speech Separation. IEEE/ACM Trans Audio Speech Lang Process 24:483-492
Chen, Jitong; Wang, Yuxuan; Wang, DeLiang (2016) Noise Perturbation for Supervised Speech Separation. Speech Commun 78:1-10
Zhang, Xiao-Lei; Wang, DeLiang (2016) A Deep Ensemble Learning Method for Monaural Speech Separation. IEEE/ACM Trans Audio Speech Lang Process 24:967-977
Chen, Jitong; Wang, Yuxuan; Yoho, Sarah E et al. (2016) Large-scale training to increase speech intelligibility for hearing-impaired listeners in novel noises. J Acoust Soc Am 139:2604

Showing the most recent 10 out of 18 publications