The purpose of this work is to obtain a better understanding of the communication difficulties experienced by listeners with sensorineural hearing loss in complex, multisource acoustic environments. The basic premise of this research is that much of this difficulty is due to an interaction between peripheral hearing loss and more centrally based processes responsible for source segregation, focused and divided attention, and working memory. On a theoretical level, our view is that the competition between sound sources may be characterized according to a distinction between two basic mechanisms of masking: energetic masking, which is primarily due to overlapping patterns of excitation in the auditory periphery; and informational masking, which results from limitations on processing at later stages in the auditory nervous system and brain. This distinction is pervasive in auditory tasks, affecting "simple" detection, discrimination and identification, and speech recognition. On a general level, both peripheral and central factors in masking affect the formation, maintenance, and processing of sequences of related auditory events, or "streams," and a theme throughout this work is to understand more fully the processing of sequential information. The approach taken here is to evaluate the influences of energetic and informational masking on performance in a variety of tasks placing demands at different levels of the auditory system. The long-range goal is to develop an integrated theory of auditory masking that accounts for energetic and informational masking generally and successfully predicts the consequences of cochlear hearing loss.
This work addresses the common problem of hearing loss and its effects on communication in group situations. Although hearing loss, whether or not it is assisted by hearing aids, may have minimal impact on one-on-one communication in a quiet setting, it is often devastating when talking with one or more persons in noisy group situations such as meetings, parties, and other social functions. Our work examines the effects of hearing loss in group situations, with particular emphasis on how hearing loss stresses cognitive processes such as attention and memory, and on the interference and distraction caused by noise.
|Clayton, Kameron K; Swaminathan, Jayaganesh; Yazdanbakhsh, Arash et al. (2016) Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians. PLoS One 11:e0157638|
|Roverud, Elin; Best, Virginia; Mason, Christine R et al. (2016) Informational Masking in Normal-Hearing and Hearing-Impaired Listeners Measured in a Nonspeech Pattern Identification Task. Trends Hear 20:|
|Swaminathan, Jayaganesh; Mason, Christine R; Streeter, Timothy M et al. (2016) Role of Binaural Temporal Fine Structure and Envelope Cues in Cocktail-Party Listening. J Neurosci 36:8250-7|
|Best, Virginia; Keidser, Gitte; Buchholz, Jörg M et al. (2016) Development and preliminary evaluation of a new test of ongoing speech comprehension. Int J Audiol 55:45-52|
|Best, Virginia; Mason, Christine R; Swaminathan, Jayaganesh et al. (2016) On the Contribution of Target Audibility to Performance in Spatialized Speech Mixtures. Adv Exp Med Biol 894:83-91|
|Kidd Jr, Gerald; Mason, Christine R; Swaminathan, Jayaganesh et al. (2016) Determining the energetic and informational components of speech-on-speech masking. J Acoust Soc Am 140:132|
|Kidd Jr, Gerald; Mason, Christine R; Best, Virginia et al. (2015) Benefits of Acoustic Beamforming for Solving the Cocktail Party Problem. Trends Hear 19:|
|Best, Virginia; Mason, Christine R; Kidd Jr, Gerald et al. (2015) Better-ear glimpsing in hearing-impaired listeners. J Acoust Soc Am 137:EL213-9|
|Swaminathan, Jayaganesh; Mason, Christine R; Streeter, Timothy M et al. (2015) Musical training, individual differences and the cocktail party problem. Sci Rep 5:11628|
|Best, Virginia; Mejia, Jorge; Freeston, Katrina et al. (2015) An evaluation of the performance of two binaural beamformers in complex and dynamic multitalker environments. Int J Audiol 54:727-35|
Showing the most recent 10 out of 51 publications