In typical situations the acoustic signals reaching our ears do not possess the simple, unambiguous directional cues that characterize those produced by a single source in an anechoic environment. Rather, the received signals most often comprise a mixture of direct sound from multiple sources as well as reflections from walls, floors, ceilings, and other surfaces. In order to make sense of the acoustic environment, the signals arriving at the ears must be analyzed to find separate auditory "objects" arising from different sources, such as a person talking, music from a television, or an air conditioner. To add to the complexity, reflections must not be perceived as separate sources, but instead must be integrated with the direct sound and associated with the correct individual source. It is truly remarkable that we are capable of forming and maintaining auditory objects, and localizing them correctly, in the face of such complex inputs. To succeed in these situations, listeners benefit greatly from having two ears separated by the head, with each ear providing input to a central auditory system capable of binaural analysis. Failure of communication can have consequences ranging from significant long-term problems with classroom learning and achievement to immediate, serious misunderstanding of what is said during emergencies. Importantly, people with hearing losses appear to have a significantly lower tolerance for such situations, and the prostheses that help in many other ways unfortunately do little to increase this tolerance. We do not yet have a good understanding of what is involved in successful listening in complex environments, even by normal-hearing listeners. The goal of this project is to make major inroads toward that understanding. The first of three specific aims is to characterize the processes that allow us to form a single perceptual image from sources and reflections.
These studies will investigate the hypothesis that fusion is enhanced as the listener constructs an internal model of the acoustic spatial environment.
The second aim is to discover the auditory mechanisms that determine where this single sound image is localized. Three distinct mechanisms appear to be involved. The goal is to isolate and understand these separate processes and to learn how they work together to facilitate localization of everyday sounds.
The third aim is to understand how the individual, correctly localized images described above are maintained in the presence of competing sound sources, and how this ability is used in conjunction with knowledge of linguistic context to improve listener performance in complex listening environments.

Public Health Relevance

Among the problems experienced by millions of people with hearing impairment, one of the most common and serious complaints is difficulty understanding sounds in complex environments containing multiple sound sources and reflections. The goal of the proposed research is to identify the auditory mechanisms that allow normal-hearing listeners to succeed in such environments. Discovery of these mechanisms will ultimately lead to improvements in how hearing aids and cochlear implants process sounds to reduce the difficulties experienced by hearing-impaired individuals.

National Institutes of Health (NIH)
National Institute on Deafness and Other Communication Disorders (NIDCD)
Research Project (R01)
Study Section
Auditory System Study Section (AUD)
Program Officer
Donahue, Amy
University of Massachusetts Amherst
Schools of Public Health
United States
Freyman, Richard L; Morse-Fortier, Charlotte; Griffin, Amanda M (2015) Temporal effects in priming of masked and degraded speech. J Acoust Soc Am 138:1418-27
Zobel, Benjamin H; Freyman, Richard L; Sanders, Lisa D (2015) Attention is critical for spatial auditory object formation. Atten Percept Psychophys 77:1998-2010
Ruggles, Dorea R; Freyman, Richard L; Oxenham, Andrew J (2014) Influence of musical training on understanding voiced and whispered speech in noise. PLoS One 9:e86980
Helfer, Karen S; Staub, Adrian (2014) Competing speech perception in older and younger adults: behavioral and eye-movement evidence. Ear Hear 35:161-70
Freyman, Richard L; Griffin, Amanda M; Macmillan, Neil A (2013) Priming of lowpass-filtered speech affects response bias, not sensitivity, in a bandwidth discrimination task. J Acoust Soc Am 134:1183-92
Helfer, Karen S; Mason, Christine R; Marino, Christine (2013) Aging and the perception of temporally interleaved words. Ear Hear 34:160-7
Jones, J Ackland; Freyman, Richard L (2012) Effect of priming on energetic and informational masking in a same-different task. Ear Hear 33:124-33
Freyman, Richard L; Griffin, Amanda M; Oxenham, Andrew J (2012) Intelligibility of whispered speech in stationary and modulated noise maskers. J Acoust Soc Am 132:2514-23
Sanders, Lisa D; Zobel, Benjamin H; Freyman, Richard L et al. (2011) Manipulations of listeners' echo perception are reflected in event-related potentials. J Acoust Soc Am 129:301-9
Freyman, Richard L; Balakrishnan, Uma; Zurek, Patrick M (2010) Lateralization of noise-burst trains based on onset and ongoing interaural delays. J Acoust Soc Am 128:320-31

Showing the most recent 10 out of 31 publications