The goal of this project is to use multimodal neuroimaging methods (functional magnetic resonance imaging (fMRI) and electroencephalography (EEG)) to examine the nature of linguistic and non-linguistic influences on brainstem encoding of speech signals in adults. In direct conflict with the concept of auditory brainstem nuclei as passive relay stations for behaviorally relevant signals, recent studies have demonstrated active transformation of the signal as represented in the auditory midbrain and brainstem. However, the mechanisms underlying such early sensory plasticity are unclear. In this proposal, an integrative model of subcortical auditory plasticity is posited (predictive tuning), which argues for continuous, online modulation of bottom-up signals via corticofugal pathways, based on an algorithm that constantly anticipates incoming stimulus regularities, thereby transforming representation in the auditory pathway. This proposal utilizes cross-language and case-control designs and innovative EEG methods to directly address the role of brainstem circuitry in the dynamic encoding of speech and to test competing neural models (local modulation vs. predictive tuning). Causal influences (top-down vs. bottom-up) during speech processing will be tested using fMRI effective connectivity analyses. The proposed experiments will provide a comprehensive examination of the mechanisms underlying brainstem plasticity and expand the understanding of the neurobiology of speech perception beyond the current corticocentric focus. Recent studies show that a number of clinical populations exhibit speech-encoding deficits at the level of the brainstem. The design and analysis methods developed in this proposal can be used to evaluate the locus (bottom-up versus top-down) of such encoding deficits.

Public Health Relevance

The goal of this project is to study top-down influences on human brainstem function as it relates to the dynamics of speech processing. Understanding mechanistic aspects of human brainstem function will provide critical insights into developing biomarkers that can evaluate the locus of speech processing deficits (bottom-up versus top-down) in clinical populations and monitor the effects of auditory and linguistic training.

Agency
National Institutes of Health (NIH)
Institute
National Institute on Deafness and Other Communication Disorders (NIDCD)
Type
Research Project (R01)
Project #
5R01DC013315-02
Application #
8827317
Study Section
Language and Communication Study Section (LCOM)
Program Officer
Platt, Christopher
Project Start
2014-04-01
Project End
2019-03-31
Budget Start
2015-04-01
Budget End
2016-03-31
Support Year
2
Fiscal Year
2015
Total Cost
$414,887
Indirect Cost
$97,739
Name
University of Texas at Austin
Department
Other Health Professions
Type
Schools of Arts and Sciences
DUNS #
170230239
City
Austin
State
TX
Country
United States
Zip Code
78712
Feng, Gangyi; Gan, Zhenzhong; Wang, Suiping et al. (2018) Task-General and Acoustic-Invariant Neural Representation of Speech Categories in the Human Brain. Cereb Cortex 28:3241-3254
Reetzke, Rachel; Xie, Zilong; Llanos, Fernando et al. (2018) Tracing the Trajectory of Sensory Plasticity across Different Stages of Speech Learning in Adulthood. Curr Biol 28:1419-1427.e4
Deng, Zhizhou; Chandrasekaran, Bharath; Wang, Suiping et al. (2018) Training-induced brain activation and functional connectivity differentiate multi-talker and single-talker speech training. Neurobiol Learn Mem 151:1-9
Llanos, Fernando; Xie, Zilong; Chandrasekaran, Bharath (2017) Hidden Markov modeling of frequency-following responses to Mandarin lexical tones. J Neurosci Methods 291:101-112
Lam, Boji P W; Xie, Zilong; Tessmer, Rachel et al. (2017) The Downside of Greater Lexical Influences: Selectively Poorer Speech Perception in Noise. J Speech Lang Hear Res 60:1662-1673
Yi, Han G; Xie, Zilong; Reetzke, Rachel et al. (2017) Vowel decoding from single-trial speech-evoked electrophysiological responses: A feature-based machine learning approach. Brain Behav 7:e00665
Van Engen, Kristin J; Xie, Zilong; Chandrasekaran, Bharath (2017) Audiovisual sentence recognition not predicted by susceptibility to the McGurk effect. Atten Percept Psychophys 79:396-403
Lau, Joseph C Y; Wong, Patrick C M; Chandrasekaran, Bharath (2017) Context-dependent plasticity in the subcortical encoding of linguistic pitch patterns. J Neurophysiol 117:594-603
Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath (2017) Stability and plasticity in neural encoding of linguistically relevant pitch patterns. J Neurophysiol 117:1407-1422
Yi, Han Gyol; Chandrasekaran, Bharath (2016) Auditory categories with separable decision boundaries are learned faster with full feedback than with minimal feedback. J Acoust Soc Am 140:1332

Showing the most recent 10 out of 29 publications