In natural situations, the sound environment is dynamic and rapidly changing, with multiple sources overlapping in time and competing for attention. The ability to listen to a friend talking while walking down a noisy city street requires brain mechanisms that disentangle the sound mixture, separating the friend's voice from the sounds of the cars and other passing conversations. Auditory scene analysis, the ability to parse and organize the mixture of sound input, is a fundamental auditory process. Yet the neural mechanisms that subserve perceptual sound organization are still poorly understood, and accounts of them often rely on theories developed primarily to describe mechanisms of the visual system. Deficits in the ability to select relevant information when there are multiple competing sound sources are a common complaint in aging and in individuals with hearing loss, and can greatly hinder communication ability. No computer algorithm or prosthetic device can mimic what the brain does in the presence of competing background noise. The overall goal of this proposal is to characterize how the auditory system adapts to dynamically changing multi-stream environments, allowing rapid and flexible shifting to different sound events in one's surroundings.
Specific Aim 1 determines how ambiguous input is physiologically stored.
Specific Aim 2 characterizes how attention modifies neural activity to support behavior in ambiguous situations.
Specific Aim 3 identifies how neural representations of auditory input adapt to changing multi-stream environments.
The aims will be accomplished by obtaining behavioral and multiple electrophysiological indices of sound organization when the input is perceptually ambiguous, thus providing a novel model in normal-hearing adults for characterizing how the brain maintains stable sound events in noisy environments. A key strength of the current project is the ability to neurophysiologically assess sound organization in auditory cortex for both attended and unattended sounds. The results of the proposed experiments will elucidate how dynamically changing environments are maintained by brain systems, characterizing how automatic and attentive mechanisms of scene analysis interact in the perception of one among many streams. This will fill a profound gap in our understanding of the neural mechanisms contributing to the perception of stable auditory events in complex and dynamically changing sound environments.

Public Health Relevance

Impairments to peripheral or central auditory mechanisms severely impact the ability to identify distinct sound events or select relevant sound streams, especially in situations with multiple, competing sound sources. Results of this proposal will significantly advance our understanding of auditory perception in more natural listening situations based on models of competing attention, and provide a basis for developing new hypotheses to assess the disordered listening that is central to numerous developmental disabilities, hearing loss, schizophrenia, and aging. Understanding how the brain segregates and integrates sounds in complex environments holds important implications for the development of medical technologies (such as hearing aids, prosthetic devices, and computer models of speech perception that deal with competing background sounds).

Agency: National Institutes of Health (NIH)
Institute: National Institute on Deafness and Other Communication Disorders (NIDCD)
Type: Research Project (R01)
Project #: 5R01DC004263-12
Application #: 8545146
Study Section: Special Emphasis Panel (ZRG1-IFCN-Q (02))
Program Officer: Donahue, Amy
Project Start: 1999-12-01
Project End: 2017-08-31
Budget Start: 2013-09-01
Budget End: 2014-08-31
Support Year: 12
Fiscal Year: 2013
Total Cost: $337,131
Indirect Cost: $135,256
Name: Albert Einstein College of Medicine
Department: Neurosciences
Type: Schools of Medicine
DUNS #: 110521739
City: Bronx
State: NY
Country: United States
Zip Code: 10461
Symonds, Renée M; Lee, Wei Wei; Kohn, Adam et al. (2016) Distinguishing Neural Adaptation and Predictive Coding Hypotheses in Auditory Change Detection. Brain Topogr
Rota-Donahue, Christine; Schwartz, Richard G; Shafer, Valerie et al. (2016) Perception of Small Frequency Differences in Children with Auditory Processing Disorder or Specific Language Impairment. J Am Acad Audiol 27:489-97
Sussman, E; Steinschneider, M; Lee, W et al. (2015) Auditory scene analysis in school-aged children with developmental language disorders. Int J Psychophysiol 95:113-24
Hisagi, Miwako; Shafer, Valerie L; Strange, Winifred et al. (2015) Neural measures of a Japanese consonant length discrimination by Japanese and American English listeners: Effects of attention. Brain Res 1626:218-31
Miller, Tova; Chen, Sufen; Lee, Wei Wei et al. (2015) Multitasking: Effects of processing multiple auditory feature patterns. Psychophysiology 52:1140-8
Pannese, Alessia; Herrmann, Christoph S; Sussman, Elyse (2015) Analyzing the auditory scene: neurophysiologic evidence of a dissociation between detection of regularity and detection of change. Brain Topogr 28:411-22
Rankin, James; Sussman, Elyse; Rinzel, John (2015) Neuromechanistic Model of Auditory Bistability. PLoS Comput Biol 11:e1004555
Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David (2015) The role of temporal structure in the investigation of sensory memory, auditory scene analysis, and speech perception: a healthy-aging perspective. Int J Psychophysiol 95:175-83
Max, Caroline; Widmann, Andreas; Schröger, Erich et al. (2015) Effects of explicit knowledge and predictability on auditory distraction and target performance. Int J Psychophysiol 98:174-81
Sussman, E S; Chen, S; Sussman-Fort, J et al. (2014) The five myths of MMN: redefining how to use MMN in basic and clinical research. Brain Topogr 27:553-64

Showing the most recent 10 out of 49 publications