The goal of this research is to explain how the auditory system analyzes complex acoustic scenes, in which multiple sound "streams" (e.g., concurrent voices) interact and compete for attention. Synergy is created by combining 1) simultaneous behavioral and neurophysiological measures from primary and secondary auditory cortex in ferrets with 2) comparable behavioral and electroencephalographic (EEG) measures in humans within 3) the theoretical and computational framework of the temporal coherence hypothesis.
Specific Aim 1 involves recordings of single-unit cortical responses while ferrets segregate speech mixtures and detect the presence of a target word spoken by the target talker. These experiments will be paralleled by tests of human streaming of speech sounds composed of natural mixtures of voiced (harmonic) and unvoiced (noise-like) sounds.
Specific Aim 2 explores the role of coherence and attention in stream binding and segregation using stimuli with simpler spectral characteristics, e.g., pure-tone and noise sequences. The goal is to test strong predictions of the temporal coherence hypothesis with respect to the effects of temporally synchronous, alternating, or overlapping sound sequences.
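To make the coherence notion concrete, the following is a minimal, illustrative Python sketch (our own toy example, not the proposal's computational model; all names and parameter values are hypothetical): it measures the windowed correlation between the amplitude envelopes of two tone channels. Synchronous sequences yield high coherence and are predicted to bind into one stream; alternating sequences yield low or negative coherence and are predicted to segregate.

# Illustrative sketch only: temporal coherence between two frequency channels,
# computed as the mean windowed Pearson correlation of their amplitude envelopes.
# Parameter choices (envelope sampling rate, window length, burst timing) are
# hypothetical and for demonstration, not taken from the proposal.
import numpy as np

def tone_sequence(onsets, tone_dur, total_dur, fs=1000):
    # Binary amplitude envelope for a sequence of tone bursts (times in seconds).
    env = np.zeros(int(total_dur * fs))
    for t in onsets:
        env[int(t * fs):int((t + tone_dur) * fs)] = 1.0
    return env

def temporal_coherence(env_a, env_b, win=0.5, fs=1000):
    # Mean Pearson correlation between the two envelopes over non-overlapping windows.
    n = int(win * fs)
    scores = []
    for start in range(0, len(env_a) - n + 1, n):
        a, b = env_a[start:start + n], env_b[start:start + n]
        if a.std() > 0 and b.std() > 0:
            scores.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(scores)) if scores else 0.0

# Two tones at different frequencies: 75-ms bursts every 200 ms over 4 s.
env_A = tone_sequence(np.arange(0, 4, 0.2), 0.075, 4.0)
env_B_sync = tone_sequence(np.arange(0, 4, 0.2), 0.075, 4.0)    # synchronous with A
env_B_alt = tone_sequence(np.arange(0.1, 4, 0.2), 0.075, 4.0)   # alternating with A

print("synchronous:", temporal_coherence(env_A, env_B_sync))  # ~1.0 -> predicted to bind into one stream
print("alternating:", temporal_coherence(env_A, env_B_alt))   # negative/low -> predicted to segregate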
Specific Aim 3 extends this approach to higher-level attributes of more complex stimuli, such as pitch and timbre, which are more ecologically relevant. Here we examine the role of pitch and timbre in characterizing the continuity of a stream and in binding its elements, in both ferrets and humans.

This research has direct and significant health implications, because one of the most common complaints of hearing-impaired individuals (including wearers of hearing aids or cochlear implants) is that they find it difficult to separate concurrent streams of sounds, and to attend selectively to one of these streams (such as someone's voice) among other streams. A clearer understanding of the mechanisms underlying the perceptual ability to separate, and attend to, auditory streams will likely lead to a clearer understanding of the origin of these selective-listening difficulties, and it may inspire the design of more effective sound-separation algorithms for use in hearing aids, cochlear implants, and automatic speech recognition devices.

Public Health Relevance

The research described in this proposal should lead to a better understanding of the brain mechanisms that underlie the ability of people with normal hearing to tease apart, and follow selectively, concurrent sound streams, such as voices. This is directly relevant to the public-health issue of hearing impairment, because one of the most common complaints of hearing-impaired individuals (including wearers of hearing aids or cochlear implants) is that they find it difficult to separate concurrent streams of sounds, and to attend selectively to one of these streams (such as someone's voice) among other streams (such as other voices). A clearer understanding of the mechanisms underlying the perceptual ability to separate, and attend to, auditory streams will likely lead to a clearer understanding of the origin of these selective-listening difficulties, and it may inspire the design of more effective sound-separation algorithms for use in auditory prostheses, such as hearing aids and cochlear implants, as well as in automatic speech recognition systems.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
5R01DC016119-02
Application #
9685893
Study Section
Auditory System Study Section (AUD)
Program Officer
Miller, Roger
Project Start
2018-05-01
Project End
2023-04-30
Budget Start
2019-05-01
Budget End
2020-04-30
Support Year
2
Fiscal Year
2019
Total Cost
Indirect Cost
Name
University of Maryland College Park
Department
Engineering (All Types)
Type
Biomed Engr/Col Engr/Engr Sta
DUNS #
790934285
City
College Park
State
MD
Country
United States
Zip Code
20742