In realistic, everyday acoustic environments, human listeners - regardless of whether they have normal or impaired hearing - depend on the considerable processing power of the human brain to evaluate the auditory scene. The various sources of sound must be parsed and evaluated for relevance and significance, and attention must then be directed to the desired source (the target) while interfering sources (the maskers) are discarded and ignored. Listeners with sensorineural hearing loss (SNHL) - even when wearing hearing aids - often experience extreme difficulty perceptually navigating the auditory scene, severely limiting their ability to communicate effectively. From an acoustic perspective, the designation of a sound source as target versus masker is arbitrary because it depends on the current - and changeable - internal state of the observer. Although the amplification of sounds by hearing aids provides the best (often the only) option for improving communication for listeners with SNHL, current hearing aids inherently fail to solve the source selection problem because they amplify target and masker sounds indiscriminately. The challenge is to devise a hearing aid that focuses only on those sounds the listener chooses to attend, suppresses competing sounds, and responds to the wishes of the listener immediately, accurately, and effectively. The goal of the work proposed here is to evaluate a new approach to providing amplification for listeners with hearing loss, based on the premise that only the listener can make the distinction between which sources to attend and which to ignore. The experiments employ a prototype hearing aid that combines an eye-tracking device with an array of microphones that forms a steerable acoustic beam. By sensing where the eyes are focused, the prototype device can steer the beam of amplification toward the desired source.
In that sense, it implements top-down control of focused amplification for the purpose of enhancing sound source selection. The primary goal is to determine the conditions under which top-down control of selective amplification, as implemented by this visually guided hearing aid (VGHA), can benefit persons with SNHL in complex, dynamic, and uncertain listening environments. This goal is to be accomplished under two specific aims that explore 1) hypotheses about top-down control of selective amplification in multitalker sound fields, and 2) the inherent dilemma posed by spatially selective amplification: attending to a target source while concurrently monitoring the environment for new sources. A new approach to solving this dilemma will be examined using a dual task. The experimental plan to test these hypotheses employs listeners with hearing loss, matched listeners with normal hearing, and normal-hearing listeners under degraded stimulus conditions simulating the spatial hearing deficits caused by SNHL. Performance under VGHA conditions will be compared and contrasted with that obtained under representative control conditions, and the experimental plan is designed explicitly to take into account the type of masking - energetic versus informational - that is present.
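The core signal-processing idea behind the steerable acoustic beam - aligning and summing microphone signals so that sound from the attended direction adds coherently while sound from other directions partially cancels - can be illustrated with a minimal delay-and-sum sketch. This is an illustrative simplification, not the prototype's actual algorithm: the array geometry, sampling rate, and function names below are all hypothetical, and the gaze angle is simply taken as a given steering input rather than read from an eye tracker.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate speed of sound in air


def steering_delays(mic_x, angle_deg):
    """Per-microphone arrival-time offsets (s) for a far-field plane wave
    reaching a linear array (mic positions mic_x, in meters) from
    angle_deg (0 = broadside). In a gaze-steered device, angle_deg would
    come from the eye tracker; here it is just a parameter."""
    angle = np.deg2rad(angle_deg)
    # Far field: extra path length along the array is x * sin(angle).
    return mic_x * np.sin(angle) / SPEED_OF_SOUND


def delay_and_sum(signals, delays, fs):
    """Delay-and-sum beamformer. signals: (n_mics, n_samples) array;
    delays: (n_mics,) seconds. Fractional delays are applied as phase
    shifts in the frequency domain, then the channels are averaged."""
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    # Advance each channel by its arrival delay so the attended source
    # lines up in time across microphones before summing.
    phase = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft(spectra * phase, n=n, axis=1).mean(axis=0)
```

Steering the beam at the true source direction reconstructs the source nearly intact, while steering elsewhere leaves the channels misaligned so the summed output is attenuated - the spatial selectivity that, in the VGHA concept, is placed under the listener's top-down (gaze) control.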

Public Health Relevance

This study is relevant to public health in that it may lead to a new and better type of hearing aid, thereby benefiting many people who suffer from permanent hearing loss. The scientific work tests and refines a prototype 'visually-guided hearing aid' to assist listeners with hearing loss in communicating in noisy rooms.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
5R01DC013286-05
Application #
9455660
Study Section
Special Emphasis Panel (ZRG1)
Program Officer
King, Kelly Anne
Project Start
2014-04-01
Project End
2019-03-31
Budget Start
2018-04-01
Budget End
2019-03-31
Support Year
5
Fiscal Year
2018
Total Cost
Indirect Cost
Name
Boston University
Department
Other Health Professions
Type
Sch Allied Health Professions
DUNS #
049435266
City
Boston
State
MA
Country
United States
Zip Code
Roverud, Elin; Best, Virginia; Mason, Christine R et al. (2018) Evaluating the Performance of a Visually Guided Hearing Aid Using a Dynamic Auditory-Visual Word Congruence Task. Ear Hear 39:756-769
Kidd Jr, Gerald (2017) Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid. J Speech Lang Hear Res 60:3027-3038
Best, Virginia; Mason, Christine R; Swaminathan, Jayaganesh et al. (2017) Use of a glimpsing model to understand the performance of listeners with and without hearing loss in spatialized speech mixtures. J Acoust Soc Am 141:81
Best, Virginia; Roverud, Elin; Streeter, Timothy et al. (2017) The Benefit of a Visually Guided Beamformer in a Dynamic Speech Task. Trends Hear 21:2331216517722304
Best, Virginia; Roverud, Elin; Mason, Christine R et al. (2017) Examination of a hybrid beamformer that preserves auditory spatial cues. J Acoust Soc Am 142:EL369
Swaminathan, Jayaganesh; Mason, Christine R; Streeter, Timothy M et al. (2016) Role of Binaural Temporal Fine Structure and Envelope Cues in Cocktail-Party Listening. J Neurosci 36:8250-7
Kidd Jr, Gerald; Mason, Christine R; Swaminathan, Jayaganesh et al. (2016) Determining the energetic and informational components of speech-on-speech masking. J Acoust Soc Am 140:132
Best, Virginia; Mason, Christine R; Swaminathan, Jayaganesh et al. (2016) On the Contribution of Target Audibility to Performance in Spatialized Speech Mixtures. Adv Exp Med Biol 894:83-91
Best, Virginia; Mejia, Jorge; Freeston, Katrina et al. (2015) An evaluation of the performance of two binaural beamformers in complex and dynamic multitalker environments. Int J Audiol 54:727-35
Kidd Jr, Gerald; Mason, Christine R; Best, Virginia et al. (2015) Benefits of Acoustic Beamforming for Solving the Cocktail Party Problem. Trends Hear 19:

Showing the most recent 10 out of 11 publications