In natural situations, the sound environment is dynamic and rapidly changing, with multiple sources overlapping in time and competing for attention. The ability to listen to your friend talking while walking down a noisy city street requires brain mechanisms that disentangle the sound mixture, separating your friend's voice from the sounds of the cars and other passing conversations. Auditory scene analysis, the ability to parse and organize the mixture of sound input, is a fundamental auditory process. Yet the neural mechanisms that subserve perceptual sound organization are still poorly understood, and current accounts often rely on theories developed primarily to describe mechanisms of the visual system. Deficits in the ability to select relevant information when there are multiple competing sound sources are a common complaint in aging and in individuals with hearing loss, and can greatly hinder communication ability. No computer algorithm or prosthetic device can mimic what the brain does when there is competing background noise. The overall goal of this proposal is to characterize how the auditory system adapts to dynamically changing multi-stream environments, allowing rapid and flexible shifting to different sound events in one's surroundings.
Specific Aim 1 determines how ambiguous input is physiologically stored.
Specific Aim 2 characterizes how attention modifies neural activity to support behavior in ambiguous situations.
Specific Aim 3 identifies how neural representations of auditory input accommodate to changing multi-stream environments.
The aims will be accomplished by obtaining behavioral and multiple electrophysiological indices of sound organization when the input is perceptually ambiguous, thus providing a novel model in normal-hearing adults for characterizing how the brain maintains stable sound events in noisy environments. A key strength of the current project is the ability to neurophysiologically assess sound organization in auditory cortex for both attended and unattended sounds. The results of the proposed experiments will elucidate how representations of dynamically changing environments are maintained by brain systems, characterizing how automatic and attentive mechanisms of scene analysis interact in the perception of one among many streams. This will fill a profound gap in our understanding of the neural mechanisms contributing to the perception of stable auditory events in complex and dynamically changing sound environments.

Public Health Relevance

Impairments to peripheral or central auditory mechanisms severely impact the ability to identify distinct sound events or select relevant sound streams, especially in situations with multiple, competing sound sources. Results of this proposal will significantly advance our understanding of auditory perception in more natural listening situations based on models of competing attention, and provide a basis for developing new hypotheses to assess the disordered listening that is central to numerous developmental disabilities, hearing loss, schizophrenia, and aging. Understanding how the brain segregates and integrates sounds in complex environments holds important implications for the development of medical technologies (such as hearing aids, prosthetic devices, and computer models of speech perception that deal with competing background sounds).

National Institutes of Health (NIH)
National Institute on Deafness and Other Communication Disorders (NIDCD)
Research Project (R01)
Study Section: Special Emphasis Panel (ZRG1-IFCN-Q (02))
Program Officer: Donahue, Amy
Albert Einstein College of Medicine
Schools of Medicine
United States
Yu, Yan H; Shafer, Valerie L; Sussman, Elyse S (2018) The Duration of Auditory Sensory Memory for Vowel Processing: Neurophysiological and Behavioral Measures. Front Psychol 9:335
Ruhnau, Philipp; Schröger, Erich; Sussman, Elyse S (2017) Implicit expectations influence target detection in children and adults. Dev Sci 20:
Symonds, Renée M; Lee, Wei Wei; Kohn, Adam et al. (2017) Distinguishing Neural Adaptation and Predictive Coding Hypotheses in Auditory Change Detection. Brain Topogr 30:136-148
Costa-Faidella, Jordi; Sussman, Elyse S; Escera, Carles (2017) Selective entrainment of brain oscillations drives auditory perceptual organization. Neuroimage 159:195-206
Dinces, Elizabeth; Sussman, Elyse S (2017) Attentional Resources Are Needed for Auditory Stream Segregation in Aging. Front Aging Neurosci 9:414
Rota-Donahue, Christine; Schwartz, Richard G; Shafer, Valerie et al. (2016) Perception of Small Frequency Differences in Children with Auditory Processing Disorder or Specific Language Impairment. J Am Acad Audiol 27:489-97
Rankin, James; Sussman, Elyse; Rinzel, John (2015) Neuromechanistic Model of Auditory Bistability. PLoS Comput Biol 11:e1004555
Hisagi, Miwako; Shafer, Valerie L; Strange, Winifred et al. (2015) Neural measures of a Japanese consonant length discrimination by Japanese and American English listeners: Effects of attention. Brain Res 1626:218-31
Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David (2015) The role of temporal structure in the investigation of sensory memory, auditory scene analysis, and speech perception: a healthy-aging perspective. Int J Psychophysiol 95:175-83
Max, Caroline; Widmann, Andreas; Schröger, Erich et al. (2015) Effects of explicit knowledge and predictability on auditory distraction and target performance. Int J Psychophysiol 98:174-81

Showing the most recent 10 out of 57 publications