This collaborative project combines psychoacoustic and physiological investigations of a fundamental perceptual component of auditory scene analysis known as auditory streaming. This phenomenon manifests itself in the everyday ability of humans and animals to parse complex acoustic input arising from multiple sound sources into meaningful auditory "streams". For instance, listening to one talker at a cocktail party or following a violin in an orchestra both appear to rely on the formation of auditory streams. Although this ability seems effortless, the neural mechanisms underlying it remain a mystery. Consequently, the few existing models of auditory stream segregation remain highly speculative and lack physiological data to support their formulations.

The primary objective of the proposed research is to explore the streaming of complex sounds in humans and to investigate the neural mechanisms that underlie this ability in animals. Until recently, the investigation of the neural mechanisms of streaming in non-human species was hampered by the difficulty of assessing a subjective perceptual phenomenon such as streaming without relying on introspection and language. Building on recent progress in the area, this project overcomes that difficulty through the development and use of specially designed stimuli and psychoacoustic tasks that induce, manipulate, and objectively assess streaming in both animals and humans. These stimuli and tasks are also designed so that valuable physiological data can be collected in animals simultaneously with task performance. In addition, the proposed research rigorously investigates the hypothesis that the streaming of a complex sound out of a cluttered acoustic environment is associated with the segregation of spectral and temporal features found at a higher level of representation. Specifically, these features are inspired by physiological and psychoacoustic studies of spectrotemporal analysis in the auditory cortex.
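The abstract does not specify the stimuli used to induce and manipulate streaming. A standard paradigm in this literature is the ABA- triplet sequence (van Noorden), in which increasing the frequency separation between the A and B tones tends to split the percept from one "galloping" stream into two separate streams. The sketch below generates such a sequence; all parameter values (sample rate, tone frequency, frequency separation, slot duration) are illustrative assumptions, not the project's actual stimuli.

```python
import numpy as np

FS = 16000  # sample rate in Hz (illustrative choice)

def tone(freq_hz, dur_s, fs=FS):
    """Pure tone with 5 ms raised-cosine onset/offset ramps."""
    t = np.arange(int(dur_s * fs)) / fs
    x = np.sin(2 * np.pi * freq_hz * t)
    n_ramp = int(0.005 * fs)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    env = np.ones_like(x)
    env[:n_ramp] = ramp
    env[-n_ramp:] = ramp[::-1]
    return x * env

def aba_sequence(f_a=1000.0, df_semitones=6.0, n_triplets=10,
                 slot_dur=0.05, fs=FS):
    """ABA- triplet sequence: tones A, B, A, then a silent slot.

    A small A-B separation (df) is typically heard as one stream
    with a galloping rhythm; a large separation tends to segregate
    into two isochronous streams.
    """
    f_b = f_a * 2.0 ** (df_semitones / 12.0)  # B tone df semitones above A
    silence = np.zeros(int(slot_dur * fs))
    triplet = np.concatenate([tone(f_a, slot_dur, fs),
                              tone(f_b, slot_dur, fs),
                              tone(f_a, slot_dur, fs),
                              silence])
    return np.tile(triplet, n_triplets)

sig = aba_sequence()
```

Varying `df_semitones` (and the tone repetition rate) is the usual experimental handle for pushing listeners between the one-stream and two-stream percepts.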

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
1R01DC007657-01A1
Application #
7089279
Study Section
Auditory System Study Section (AUD)
Program Officer
Donahue, Amy
Project Start
2006-05-01
Project End
2011-04-30
Budget Start
2006-05-01
Budget End
2007-04-30
Support Year
1
Fiscal Year
2006
Total Cost
$323,276
Indirect Cost
Name
University of Maryland College Park
Department
Engineering (All Types)
Type
Schools of Engineering
DUNS #
790934285
City
College Park
State
MD
Country
United States
Zip Code
20742
Oxenham, Andrew J (2018) How We Hear: The Perception and Neural Coding of Sound. Annu Rev Psychol 69:27-50
David, Marion; Tausend, Alexis N; Strelcyk, Olaf et al. (2018) Effect of age and hearing loss on auditory stream segregation of speech sounds. Hear Res 364:118-128
Chambers, Claire; Akram, Sahar; Adam, Vincent et al. (2017) Prior context in audition informs binding and shapes simple features. Nat Commun 8:15027
David, Marion; Lavandier, Mathieu; Grimault, Nicolas et al. (2017) Discrimination and streaming of speech sounds based on differences in interaural and spectral cues. J Acoust Soc Am 142:1674
Mehta, Anahita H; Jacoby, Nori; Yasin, Ifat et al. (2017) An auditory illusion reveals the role of streaming in the temporal misallocation of perceptual objects. Philos Trans R Soc Lond B Biol Sci 372:
David, Marion; Lavandier, Mathieu; Grimault, Nicolas et al. (2017) Sequential stream segregation of voiced and unvoiced speech sounds based on fundamental frequency. Hear Res 344:235-243
Lu, Kai; Xu, Yanbo; Yin, Pingbo et al. (2017) Temporal coherence structure rapidly shapes neuronal interactions. Nat Commun 8:13900
Mehta, Anahita H; Yasin, Ifat; Oxenham, Andrew J et al. (2016) Neural correlates of attention and streaming in a perceptually multistable auditory illusion. J Acoust Soc Am 140:2225
Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z et al. (2016) Robust decoding of selective auditory attention from MEG in a competing-speaker environment via state-space modeling. Neuroimage 124:906-917
O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C (2015) Evidence for Neural Computations of Temporal Coherence in an Auditory Scene and Their Enhancement during Active Listening. J Neurosci 35:7256-63

Showing the most recent 10 out of 49 publications