The long-term goal of this research is to explain how the auditory system analyzes complex acoustic "scenes," in which multiple sound "streams" (e.g., concurrent voices) interact and compete for attention. This project will investigate the influence of attention on neural responses to streams of simple and complex sounds. The approach combines simultaneous single-unit recordings and behavioral tasks in ferrets with psychophysical measures in humans. Neural responses to sound streams will be measured in different fields of the ferret auditory cortex (spanning both primary, or "core," and secondary, or "belt," areas) under different attentional conditions, including both global and selective attention. The animal's attentional state will be manipulated using behavioral tasks inspired by psychophysical measures of auditory stream perception in humans. These tasks are designed so that correct performance requires the animal to attend selectively to one of the sound streams present, globally to the whole auditory stimulus, or to a visual stimulus. By comparing neural responses under different attentional conditions, it will be possible to assess how attention influences neural responses to auditory streams in different parts of the auditory cortex and, more generally, whether and how neural representations of auditory streams are formed and processed in primary and secondary areas of auditory cortex. These neurophysiological and behavioral measures in ferrets will be paralleled and extended by psychoacoustical measures in humans, using the same or different stimuli and tasks. This will be done first for stimuli with simple spectral and temporal characteristics, i.e., pure-tone sequences (Specific Aim 1), in order to bridge the gap with earlier studies of the neural basis of auditory streaming. In the second half of the proposal (Specific Aim 2), this approach will be extended to spectrally complex tones, which are more ecologically relevant. This research has direct and significant health implications, because one of the most common complaints of hearing-impaired individuals (including wearers of hearing aids or cochlear implants) is that they find it difficult to separate concurrent streams of sounds, and to attend selectively to one of these streams (such as someone's voice) among other streams. A clearer understanding of the mechanisms underlying the perceptual ability to separate, and attend to, auditory streams will likely clarify the origin of these selective-listening difficulties, and it may inspire the design of more effective sound-separation algorithms for use in hearing aids, cochlear implants, and automatic speech recognition devices.

Public Health Relevance

The research described in this proposal will lead to a better understanding of the brain mechanisms that underlie the ability of normal-hearing listeners to tease apart, and follow selectively, concurrent sound streams, such as voices. This is directly relevant to the public-health issue of hearing impairment, because one of the most common complaints of hearing-impaired individuals (including wearers of hearing aids or cochlear implants) is that they find it difficult to separate concurrent streams of sounds, and to attend selectively to one of these streams (such as someone's voice) among other streams (such as other voices). A clearer understanding of the mechanisms underlying the perceptual ability to separate, and attend to, auditory streams will likely lead to a clearer understanding of the origin of these selective-listening difficulties, and it may inspire the design of more effective sound-separation algorithms for use in auditory prostheses, such as hearing aids and cochlear implants.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
5R01DC007657-07
Application #
8277242
Study Section
Special Emphasis Panel (ZRG1-IFCN-B (02))
Program Officer
Miller, Roger
Project Start
2005-02-01
Project End
2016-05-31
Budget Start
2012-06-01
Budget End
2013-05-31
Support Year
7
Fiscal Year
2012
Total Cost
$410,558
Indirect Cost
$80,683
Name
University of Maryland College Park
Department
Engineering (All Types)
Type
Schools of Engineering
DUNS #
790934285
City
College Park
State
MD
Country
United States
Zip Code
20742
Oxenham, Andrew J (2018) How We Hear: The Perception and Neural Coding of Sound. Annu Rev Psychol 69:27-50
David, Marion; Tausend, Alexis N; Strelcyk, Olaf et al. (2018) Effect of age and hearing loss on auditory stream segregation of speech sounds. Hear Res 364:118-128
Chambers, Claire; Akram, Sahar; Adam, Vincent et al. (2017) Prior context in audition informs binding and shapes simple features. Nat Commun 8:15027
David, Marion; Lavandier, Mathieu; Grimault, Nicolas et al. (2017) Discrimination and streaming of speech sounds based on differences in interaural and spectral cues. J Acoust Soc Am 142:1674
Mehta, Anahita H; Jacoby, Nori; Yasin, Ifat et al. (2017) An auditory illusion reveals the role of streaming in the temporal misallocation of perceptual objects. Philos Trans R Soc Lond B Biol Sci 372
David, Marion; Lavandier, Mathieu; Grimault, Nicolas et al. (2017) Sequential stream segregation of voiced and unvoiced speech sounds based on fundamental frequency. Hear Res 344:235-243
Lu, Kai; Xu, Yanbo; Yin, Pingbo et al. (2017) Temporal coherence structure rapidly shapes neuronal interactions. Nat Commun 8:13900
Mehta, Anahita H; Yasin, Ifat; Oxenham, Andrew J et al. (2016) Neural correlates of attention and streaming in a perceptually multistable auditory illusion. J Acoust Soc Am 140:2225
Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z et al. (2016) Robust decoding of selective auditory attention from MEG in a competing-speaker environment via state-space modeling. Neuroimage 124:906-917
O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C (2015) Evidence for Neural Computations of Temporal Coherence in an Auditory Scene and Their Enhancement during Active Listening. J Neurosci 35:7256-63

Showing the most recent 10 out of 49 publications