Human listeners depend on the sense of hearing to communicate effectively in everyday social situations. We rely heavily on the ability to selectively attend to a single voice in a noisy background and to follow transitions between talkers during conversation, yet this task is complex, and accomplishing it successfully depends on the integrity of processing at physiological sites spanning the auditory periphery to the brain. It is well known that hearing loss may adversely affect a listener's ability to perceptually segregate one talker from competing talkers and to understand that talker's spoken message (i.e., the cocktail party problem; see Middlebrooks et al., 2017, for a series of recent reviews). The most common remedy for sensorineural hearing loss (SNHL) is a hearing aid, or a pair of aids, that boosts sounds to audible levels while preserving comfortable loudness and may improve the signal-to-noise ratio for certain classes of sounds via noise reduction. However, even when listeners with SNHL wear hearing aids, they often still experience extreme difficulty perceptually navigating the auditory scene, which severely limits their ability to communicate effectively.

One reason is that, from an acoustic perspective, the designation of a particular sound source as target versus masker is arbitrary: it depends on the current, and changeable, internal state of the observer. The distinction between a target talker to be attended and a masker talker to be ignored can therefore be made only by the listener and may change from moment to moment. Although amplification by hearing aids provides the best (often the only) option for improving communication for listeners with SNHL, current hearing aids inherently fail to solve this source selection problem because they amplify target and masker sounds indiscriminately, with no way to determine which source the listener has chosen as the target. The challenge, then, is to devise a hearing aid that amplifies only those sounds the listener chooses to attend and suppresses competing sounds, responding to the wishes of the listener immediately, accurately, and effectively.

During the past award period, our work demonstrated that acoustic beamforming implemented with a head-worn microphone array can provide a significant advantage to listeners with SNHL in solving the cocktail party problem. Furthermore, we found that the beam of amplification can be steered quickly and effectively by sensing eye gaze with an eye tracker and directing the acoustic look direction (ALD) of the beam accordingly. The present application requests support to continue work on this visually guided hearing aid (VGHA) and to further examine the scientific premise on which it is based. The overall goals are to better understand how top-down control of selective amplification assists listeners with SNHL in typical social situations, to advance our understanding of auditory and auditory-visual selective attention, and to extend the potential benefits of the VGHA to new populations of listeners (users of bilateral cochlear implants and persons with aphasia) who typically experience great difficulty understanding speech in complex, multiple-talker communication situations.
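To make the gaze-steering idea concrete, the sketch below shows one simple way a beamformer's acoustic look direction could be slaved to a measured gaze azimuth: a frequency-domain delay-and-sum beamformer over a uniform linear microphone array, with steering delays recomputed from the current gaze angle. This is a minimal illustration only; the array geometry, sample rate, and delay-and-sum formulation are assumptions made for the example and do not describe the VGHA's actual beamforming implementation.

```python
# Illustrative sketch: delay-and-sum beamforming with the acoustic look
# direction (ALD) set from a gaze azimuth. All parameters below (array
# geometry, sample rate, delay method) are assumptions for this example,
# not the VGHA's actual design.
import numpy as np

FS = 16000       # sample rate in Hz (assumed)
C = 343.0        # speed of sound in m/s
N_MICS = 8       # microphones in a uniform linear array (assumed)
SPACING = 0.02   # inter-microphone spacing in meters (assumed)

# Microphone positions along the array axis, centered at the origin.
mic_x = (np.arange(N_MICS) - (N_MICS - 1) / 2) * SPACING

def steering_delays(gaze_azimuth_deg):
    """Per-microphone delays (s) that align a plane wave arriving from
    the gaze direction; 0 degrees = broadside (straight ahead)."""
    theta = np.deg2rad(gaze_azimuth_deg)
    return mic_x * np.sin(theta) / C

def delay_and_sum(mic_signals, gaze_azimuth_deg):
    """Steer the beam toward the gaze direction and average across mics.

    mic_signals: array of shape (N_MICS, n_samples).
    Returns a single-channel signal emphasizing the gazed-at source.
    """
    n = mic_signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    out = np.zeros(n)
    for sig, tau in zip(mic_signals, steering_delays(gaze_azimuth_deg)):
        # Apply a fractional time delay via a phase shift in the
        # frequency domain, then accumulate the aligned signal.
        spectrum = np.fft.rfft(sig) * np.exp(-2j * np.pi * freqs * tau)
        out += np.fft.irfft(spectrum, n)
    return out / N_MICS

# Example: steer toward a talker the listener is looking at, 30 deg right.
# mics = np.random.randn(N_MICS, 1024)  # stand-in for recorded array signals
# enhanced = delay_and_sum(mics, gaze_azimuth_deg=30.0)
```

In a working system the steering delays would be recomputed continuously as the eye tracker reports new gaze angles, and an adaptive (e.g., superdirective) weighting would likely replace the uniform averaging shown here; the sketch captures only the coupling between gaze direction and the beam's look direction.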
The goal of this work is to design a better hearing aid: the 'visually guided hearing aid,' which allows the listener to select which sound source to amplify simply by looking at it. The aid depends on the coordinated actions of hearing and vision, and on how the two senses work together to direct selective attention to specific sounds, such as one human voice in a background of noise or other voices.
Roverud, Elin; Best, Virginia; Mason, Christine R et al. (2018) Evaluating the Performance of a Visually Guided Hearing Aid Using a Dynamic Auditory-Visual Word Congruence Task. Ear Hear 39:756-769
Kidd Jr, Gerald (2017) Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid. J Speech Lang Hear Res 60:3027-3038
Best, Virginia; Mason, Christine R; Swaminathan, Jayaganesh et al. (2017) Use of a glimpsing model to understand the performance of listeners with and without hearing loss in spatialized speech mixtures. J Acoust Soc Am 141:81
Best, Virginia; Roverud, Elin; Streeter, Timothy et al. (2017) The Benefit of a Visually Guided Beamformer in a Dynamic Speech Task. Trends Hear 21:2331216517722304
Best, Virginia; Roverud, Elin; Mason, Christine R et al. (2017) Examination of a hybrid beamformer that preserves auditory spatial cues. J Acoust Soc Am 142:EL369
Swaminathan, Jayaganesh; Mason, Christine R; Streeter, Timothy M et al. (2016) Role of Binaural Temporal Fine Structure and Envelope Cues in Cocktail-Party Listening. J Neurosci 36:8250-7
Kidd Jr, Gerald; Mason, Christine R; Swaminathan, Jayaganesh et al. (2016) Determining the energetic and informational components of speech-on-speech masking. J Acoust Soc Am 140:132
Best, Virginia; Mason, Christine R; Swaminathan, Jayaganesh et al. (2016) On the Contribution of Target Audibility to Performance in Spatialized Speech Mixtures. Adv Exp Med Biol 894:83-91
Kidd Jr, Gerald; Mason, Christine R; Best, Virginia et al. (2015) Benefits of Acoustic Beamforming for Solving the Cocktail Party Problem. Trends Hear 19:
Best, Virginia; Mejia, Jorge; Freeston, Katrina et al. (2015) An evaluation of the performance of two binaural beamformers in complex and dynamic multitalker environments. Int J Audiol 54:727-35