The long-term goal of this research is to explain how the auditory system analyzes complex acoustic scenes, in which multiple sound streams (e.g., concurrent voices) interact and compete for attention. This project will investigate the influence of attention on neural responses to streams of simple and complex sounds. The approach combines simultaneous single-unit recordings and behavioral tasks in ferrets with psychophysical measures in humans. Neural responses to sound streams will be measured in different fields of the ferret auditory cortex (spanning both primary, or core, and secondary, or belt, areas) under different attentional conditions, including both global and selective attention. The animal's attentional state will be manipulated using behavioral tasks inspired by psychophysical measures of auditory stream perception in humans. These tasks are designed so that correct performance requires the animal to attend selectively to one of the sound streams present, globally to the whole auditory stimulus, or to a visual stimulus. By comparing neural responses across these attentional conditions, it will be possible to assess how attention influences neural responses to auditory streams in different parts of the auditory cortex and, more generally, whether and how neural representations of auditory streams are formed and processed in primary and secondary areas of auditory cortex. These neurophysiological and behavioral measures in ferrets will be paralleled and extended by psychoacoustical measures in humans, using the same or analogous stimuli and tasks. This will be done, first, for stimuli with simple spectral and temporal characteristics, i.e., pure-tone sequences (Specific Aim 1), in order to bridge the gap with earlier studies of the neural basis of auditory streaming. In the second half of the proposal (Specific Aim 2), this approach will be extended to spectrally complex tones, which are more ecologically relevant. This research has direct and significant health implications, because one of the most common complaints of hearing-impaired individuals (including wearers of hearing aids or cochlear implants) is that they find it difficult to separate concurrent streams of sounds, and to attend selectively to one of these streams (such as someone's voice) among other streams. A clearer understanding of the mechanisms underlying the perceptual ability to separate, and attend to, auditory streams will likely clarify the origin of these selective-listening difficulties, and it may inspire the design of more effective sound-separation algorithms for use in hearing aids, cochlear implants, and automatic speech recognition devices.
The research described in this proposal will lead to a better understanding of the brain mechanisms that underlie the ability of normal-hearing listeners to tease apart, and follow selectively, concurrent sound streams, such as voices. This is directly relevant to the public-health issue of hearing impairment, because one of the most common complaints of hearing-impaired individuals (including wearers of hearing aids or cochlear implants) is that they find it difficult to separate concurrent streams of sounds, and to attend selectively to one of these streams (such as someone's voice) among other streams (such as other voices). A clearer understanding of the mechanisms underlying the perceptual ability to separate, and attend to, auditory streams will likely lead to a clearer understanding of the origin of these selective-listening difficulties, and it may inspire the design of more effective sound-separation algorithms for use in auditory prostheses, such as hearing aids and cochlear implants.