In typical situations, the acoustic signals reaching our ears do not possess the simple and unambiguous directional cues that characterize those produced by a single source in an anechoic environment. Rather, the received signals are most often created from a mixture of direct sound from multiple sources as well as reflections from walls, floors, ceilings, and other surfaces. To make sense of the acoustic environment, the signals arriving at the ears must be analyzed to find separate auditory "objects" arising from different sources, such as a person talking, music from a television, or an air conditioner. To add to the complexity, reflections must not be perceived as separate sources, but instead must be integrated with the direct sound and associated with the correct individual source. It is truly remarkable that we are capable of forming and maintaining auditory objects, and of localizing them correctly, in the face of such complex inputs. To succeed in these situations, listeners benefit greatly from having two ears separated by the head, each providing input to a central auditory system capable of binaural analysis. Failures of communication can have consequences ranging from significant long-term problems with classroom learning and achievement to serious immediate misunderstandings of what is said during emergencies. Most importantly, people with hearing loss appear to have a significantly lower tolerance for such situations, and the prostheses that help in many other ways unfortunately do little to increase this tolerance. Even for normal-hearing listeners, we do not have a good understanding of what is involved in successful listening in complex environments.

The goal of this project is to make major inroads toward that understanding. The first of three specific aims is to characterize the processes that allow us to form a single perceptual image from sources and reflections. These studies will investigate the hypothesis that fusion is enhanced as the listener constructs an internal model of the acoustic spatial environment.
The second aim is to discover the auditory mechanisms that determine where this single sound image is localized. Three distinct mechanisms appear to be involved. The goal is to isolate and understand these separate processes and learn how they work together to facilitate localization of everyday sounds.
The third aim is to understand how the individual, correctly localized images described above are maintained in the presence of competing sound sources, and how this ability is used in conjunction with knowledge of linguistic context to improve listener performance in complex listening environments.

Public Health Relevance

Among the problems experienced by millions of people with hearing impairment, one of the most common and serious complaints is difficulty understanding sounds in complex environments containing multiple sound sources and reflections. The goal of the proposed research is to identify the auditory mechanisms that allow normal-hearing listeners to succeed in such environments. Discovery of these mechanisms will ultimately lead to improvements in how hearing aids and cochlear implants process sounds to reduce the difficulties experienced by hearing-impaired individuals.

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Research Project (R01)
Project #: 5R01DC001625-18
Application #: 8471541
Study Section: Auditory System Study Section (AUD)
Program Officer: Donahue, Amy
Project Start: 1992-07-01
Project End: 2015-05-31
Budget Start: 2013-06-01
Budget End: 2014-05-31
Support Year: 18
Fiscal Year: 2013
Total Cost: $314,822
Indirect Cost: $112,947
Name: University of Massachusetts Amherst
Department: Psychology
Type: Schools of Public Health
DUNS #: 153926712
City: Amherst
State: MA
Country: United States
Zip Code: 01003
Freyman, Richard L; Morse-Fortier, Charlotte; Griffin, Amanda M et al. (2018) Can monaural temporal masking explain the ongoing precedence effect? J Acoust Soc Am 143:EL133
Freyman, Richard L; Terpening, Jenna; Costanzi, Angela C et al. (2017) The Effect of Aging and Priming on Same/Different Judgments Between Text and Partially Masked Speech. Ear Hear 38:672-680
Freyman, Richard L; Zurek, Patrick M (2017) Strength of onset and ongoing cues in judgments of lateral position. J Acoust Soc Am 142:206
Morse-Fortier, Charlotte; Parrish, Mary M; Baran, Jane A et al. (2017) The Effects of Musical Training on Speech Detection in the Presence of Informational and Energetic Masking. Trends Hear 21:2331216517739427
Helfer, Karen S; Freyman, Richard L (2016) Age equivalence in the benefit of repetition for speech understanding. J Acoust Soc Am 140:EL371
Helfer, Karen S; Merchant, Gabrielle R; Freyman, Richard L (2016) Aging and the effect of target-masker alignment. J Acoust Soc Am 140:3844
Zobel, Benjamin H; Freyman, Richard L; Sanders, Lisa D (2015) Attention is critical for spatial auditory object formation. Atten Percept Psychophys 77:1998-2010
Freyman, Richard L; Morse-Fortier, Charlotte; Griffin, Amanda M (2015) Temporal effects in priming of masked and degraded speech. J Acoust Soc Am 138:1418-27
Helfer, Karen S; Staub, Adrian (2014) Competing speech perception in older and younger adults: behavioral and eye-movement evidence. Ear Hear 35:161-70
Ruggles, Dorea R; Freyman, Richard L; Oxenham, Andrew J (2014) Influence of musical training on understanding voiced and whispered speech in noise. PLoS One 9:e86980

Showing the most recent 10 out of 37 publications