This proposal advances our understanding of how the brain separates concurrent sounds, such as multiple voices. Concurrent-sound segregation is a basic perceptual ability and is necessary for successful speech perception in complex acoustic environments. Deficits in this ability are a major complaint among hearing-impaired (HI) listeners, yet the neural mechanisms underlying concurrent-sound segregation are poorly understood. Characterizing these mechanisms is a prerequisite for more effectively alleviating listening difficulties in the HI. One of the most powerful cues for segregating concurrent harmonic sounds is a difference in their fundamental frequencies (F0s). The F0s of concurrent sounds can theoretically be extracted using 'spectral' or 'temporal' pattern-matching mechanisms, which operate on neural representations of harmonic structure (rate-place profiles) or of F0-related periodicities (periodicity-place profiles), respectively. While studies have begun examining these profiles at peripheral and subcortical levels, it is unknown whether neural representations of simultaneous harmonic sounds at the level of auditory cortex (AC) contain sufficient information to enable their perceptual segregation. To address this gap, we will combine neurophysiological recordings in primary and non-primary AC of awake, behaving monkeys with computational models to test the following hypotheses (Specific Aim 1): (1) neural representations of concurrent complex tones contain sufficient information to reliably infer their respective F0s and to enable their perceptual segregation based on F0 differences; (2) the salience of these representations is increased by introducing additional sound-segregation cues (differences in onset time, level, or spatial location).
While the results of Specific Aim 1 will characterize neural representations of 'generic' concurrent harmonic sounds, they will not address whether these representations can support the segregation and identification of concurrent speech sounds with different spectral envelopes. Thus, in Specific Aim 2, we will use spectral and temporal pattern-matching models (classifiers) to test the hypothesis that concurrent vowels differing in F0 can be successfully segregated and identified based on neural responses in AC. Our approach is unique and clinically relevant in that it bridges the gap between single-neuron recordings in experimental animals and noninvasive recordings in humans. Results of this project will enhance our understanding of speech perception in real-world environments and will ultimately contribute to public health by facilitating the development of more effective clinical approaches to alleviating perceptual difficulties in the HI, the elderly, and individuals with certain developmental language disorders.
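The spectral pattern-matching (classifier) idea described above can be illustrated with a toy harmonic-sieve model: a candidate F0 is scored by how much activity a rate-place profile shows at its harmonic frequencies, and two concurrent F0s are read out as the two best-scoring, non-overlapping templates. Everything in this sketch — the Gaussian "population" profile, the 15 Hz tuning width, the 5% exclusion window — is an illustrative assumption, not the project's actual models.

```python
import numpy as np

def rate_place_profile(f0s, n_harmonics=10, freqs=None):
    """Toy rate-place profile: a Gaussian bump of activity at each
    harmonic of each concurrent tone (stand-in for AC population rates).
    The 15 Hz bump width is an arbitrary illustrative choice."""
    if freqs is None:
        freqs = np.linspace(50.0, 2000.0, 2000)
    profile = np.zeros_like(freqs)
    for f0 in f0s:
        for h in range(1, n_harmonics + 1):
            profile += np.exp(-0.5 * ((freqs - h * f0) / 15.0) ** 2)
    return freqs, profile

def harmonic_template_score(freqs, profile, f0, n_harmonics=10):
    """Spectral pattern matching: sum profile activity at the harmonic
    frequencies of a candidate F0 (a 'harmonic sieve' template)."""
    idx = [np.argmin(np.abs(freqs - h * f0)) for h in range(1, n_harmonics + 1)]
    return profile[idx].sum()

def estimate_two_f0s(freqs, profile, candidates):
    """Greedy two-F0 readout: pick the best-matching template, then the
    best template whose F0 differs from the first by more than 5%."""
    scores = np.array([harmonic_template_score(freqs, profile, c)
                       for c in candidates])
    first = candidates[np.argmax(scores)]
    mask = np.abs(candidates / first - 1.0) > 0.05  # crude exclusion window
    second = candidates[mask][np.argmax(scores[mask])]
    return sorted((first, second))
```

For example, a profile built from concurrent tones at 100 and 125 Hz (a 25% F0 difference, well above perceptual segregation thresholds) yields estimates near those two values when scanned with candidates from 80 to 300 Hz. A serious model would also have to handle octave confusions, which this greedy readout sidesteps only by its narrow exclusion window.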
The current lack of understanding of how the brain separates concurrent sounds represents a major obstacle to addressing the perceptual difficulties of hearing-impaired individuals in complex acoustic environments (e.g., multiple voices in a cafeteria). The experiments described in this proposal will fill this gap by examining neural responses in different areas of auditory cortex to concurrent harmonic and periodic sounds such as those commonly encountered in speech (vowels) and music (notes). A better understanding of the neural mechanisms of auditory scene analysis will ultimately contribute to public health by facilitating the development of more effective clinical approaches to alleviating auditory perceptual difficulties in the hearing-impaired, the elderly, and individuals with certain developmental language disorders.
Nourski, Kirill V; Steinschneider, Mitchell; Rhone, Ariane E et al. (2015) Sound identification in human auditory cortex: Differential contribution of local field potentials and high gamma power as revealed by direct intracranial recordings. Brain Lang 148:37-50
Nourski, Kirill V; Steinschneider, Mitchell; Oya, Hiroyuki et al. (2015) Modulation of response patterns in human auditory cortex during a target detection task: an intracranial electrophysiology study. Int J Psychophysiol 95:191-201
Nourski, Kirill V; Steinschneider, Mitchell; Oya, Hiroyuki et al. (2014) Spectral organization of the human lateral superior temporal gyrus revealed by intracranial recordings. Cereb Cortex 24:340-52
Fishman, Yonatan I; Steinschneider, Mitchell; Micheyl, Christophe (2014) Neural representation of concurrent harmonic sounds in monkey primary auditory cortex: implications for models of auditory scene analysis. J Neurosci 34:12425-43
Nourski, Kirill V; Steinschneider, Mitchell; McMurray, Bob et al. (2014) Functional organization of human auditory cortex: investigation of response latencies through direct recordings. Neuroimage 101:598-609
Fishman, Yonatan I (2014) The mechanisms and meaning of the mismatch negativity. Brain Topogr 27:500-26
Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell (2013) Neural representation of harmonic complex tones in primary auditory cortex of the awake monkey. J Neurosci 33:10312-23
Steinschneider, Mitchell; Nourski, Kirill V; Fishman, Yonatan I (2013) Representation of speech in human auditory cortex: is it special? Hear Res 305:57-73
Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell (2012) Neural mechanisms of rhythmic masking release in monkey primary auditory cortex: implications for models of auditory scene analysis. J Neurophysiol 107:2366-82
Wagner, Monica; Shafer, Valerie L; Martin, Brett et al. (2012) The phonotactic influence on the perception of a consonant cluster /pt/ by native English and native Polish listeners: a behavioral and event related potential (ERP) study. Brain Lang 123:30-41
Showing the most recent 10 out of 16 publications