In typical situations, the acoustic signals reaching our ears do not possess the simple, unambiguous directional cues that characterize those produced by a single source in an anechoic environment. Rather, the received signals are most often a mixture of direct sound from multiple sources and reflections from walls, floors, ceilings, and other surfaces. To make sense of the acoustic environment, the signals arriving at the ears must be analyzed to find separate auditory "objects" arising from different sources, such as a person talking, music from a television, or an air conditioner. To add to the complexity, reflections must not be perceived as separate sources, but must instead be integrated with the direct sound and associated with the correct individual source. It is truly remarkable that we are capable of forming and maintaining auditory objects, and of localizing them correctly, in the face of such complex inputs. To succeed in these situations, listeners benefit greatly from having two ears separated by the head, each providing input to a central auditory system capable of binaural analysis. Failures of communication can have consequences ranging from significant long-term problems with classroom learning and achievement to serious, immediate misunderstandings of what is said during emergencies. Most importantly, people with hearing loss appear to have a significantly lower tolerance for such situations, and the prostheses that help in many other ways unfortunately do little to increase this tolerance. Even for normal-hearing listeners, we do not have a good understanding of what successful listening in complex environments involves. The goal of this project is to make major inroads toward that understanding.
The first of three specific aims is to characterize the processes that allow us to form a single perceptual image from sources and reflections. These studies will investigate the hypothesis that fusion is enhanced as the listener constructs an internal model of the acoustic spatial environment.
The second aim is to discover the auditory mechanisms that determine where this single sound image is localized. Three distinct mechanisms appear to be involved. The goal is to isolate and understand these separate processes and learn how they work together to facilitate localization of everyday sounds.
The third aim is to understand how the individual, correctly localized images described above are maintained in the presence of competing sound sources, and how this ability is used in conjunction with knowledge of linguistic context to improve listener performance in complex listening environments.

Public Health Relevance

Among the problems experienced by millions of people with hearing impairment, one of the most common and serious complaints is difficulty understanding sounds in complex environments containing multiple sound sources and reflections. The goal of the proposed research is to identify the auditory mechanisms that allow normal-hearing listeners to succeed in such environments. Discovery of these mechanisms will ultimately lead to improvements in how hearing aids and cochlear implants process sounds to reduce the difficulties experienced by hearing-impaired individuals.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
5R01DC001625-17
Application #
8290217
Study Section
Auditory System Study Section (AUD)
Program Officer
Donahue, Amy
Project Start
1992-07-01
Project End
2015-05-31
Budget Start
2012-06-01
Budget End
2013-05-31
Support Year
17
Fiscal Year
2012
Total Cost
$331,699
Indirect Cost
$119,199
Name
University of Massachusetts Amherst
Department
Psychology
Type
Schools of Public Health
DUNS #
153926712
City
Amherst
State
MA
Country
United States
Zip Code
01003
Ruggles, Dorea R; Freyman, Richard L; Oxenham, Andrew J (2014) Influence of musical training on understanding voiced and whispered speech in noise. PLoS One 9:e86980
Helfer, Karen S; Staub, Adrian (2014) Competing speech perception in older and younger adults: behavioral and eye-movement evidence. Ear Hear 35:161-70
Freyman, Richard L; Griffin, Amanda M; Macmillan, Neil A (2013) Priming of lowpass-filtered speech affects response bias, not sensitivity, in a bandwidth discrimination task. J Acoust Soc Am 134:1183-92
Helfer, Karen S; Mason, Christine R; Marino, Christine (2013) Aging and the perception of temporally interleaved words. Ear Hear 34:160-7
Jones, J Ackland; Freyman, Richard L (2012) Effect of priming on energetic and informational masking in a same-different task. Ear Hear 33:124-33
Sanders, Lisa D; Zobel, Benjamin H; Freyman, Richard L et al. (2011) Manipulations of listeners' echo perception are reflected in event-related potentials. J Acoust Soc Am 129:301-9
Freyman, Richard L; Balakrishnan, Uma; Zurek, Patrick M (2010) Lateralization of noise-burst trains based on onset and ongoing interaural delays. J Acoust Soc Am 128:320-31
Helfer, Karen S; Vargo, Megan (2009) Speech recognition and temporal processing in middle-aged women. J Am Acad Audiol 20:264-71
Keen, Rachel; Freyman, Richard L (2009) Release and re-buildup of listeners' models of auditory space. J Acoust Soc Am 125:3243-52
Helfer, Karen S; Freyman, Richard L (2008) Aging and speech-on-speech masking. Ear Hear 29:87-98

Showing the most recent 10 out of 28 publications