Communicating using spoken language feels effortless and automatic to healthy listeners with no hearing deficits or language processing problems. But this subjective ease belies the number and complexity of the operations that, in aggregate, constitute speech perception. Transforming the acoustic signal that arrives at the ear into the abstract representations that underpin language processing requires many intermediate steps. When one or several of these operations malfunction, pathologies of hearing, speech perception, or language processing can result. Developing a theoretically well-motivated, mechanistic, and neurobiologically grounded understanding of this system remains one of the foundational challenges of the cognitive neuroscience of hearing, speech, and language. The research program outlined in this proposal strives to further develop a brain-based model of speech perception that is motivated by the insights of linguistic and psychological research, on the one hand, and sensitive to the physical (acoustic) and neurobiological constraints of speech processing, on the other. The proposed experiments use the noninvasive electrophysiological neuroimaging technique magnetoencephalography (MEG), paired with magnetic resonance imaging (MRI). MEG is particularly useful because it combines very high temporal resolution (necessary because speech processing is fast) with good spatial resolution (necessary to understand the anatomical organization of the system). We investigate the speech processing system in the context of three specific research aims. The first aim seeks to characterize more precisely the functional architecture of speech processing in the brain; in particular, we want to understand the computational contribution of the critical regions mediating speech processing, in both perception and production.
Furthermore, we test whether the same architectural (dual-stream) model accounts for both the perception of speech (well established) and the covert (internal) and overt production of speech (a novel extension). The studies in the second aim test whether the intrinsic brain rhythms (neural oscillations) observed in animal and human studies play a causal role in speech processing, as has recently been hypothesized. For example, the alignment of slow brain rhythms with the input signal may be necessary to understand speech, by parsing the continuous spoken input into chunks of the right size for further analysis. In the third aim, we turn to the perennial puzzle of brain asymmetry and its role in speech processing. Building on the studies of oscillations, we evaluate whether left and right auditory regions execute the same or different analyses of the speech input. As a group, these studies serve to further specify the 'parts list' of auditory and speech processing, with a special emphasis on timing and its implications for health and disease.
Communicating with spoken language is the most common form of human interaction. The many complex processes that underlie comprehension can malfunction at multiple levels, giving rise to hearing, speech, or language disorders. Current noninvasive brain recording techniques can help us understand how the speech processing system functions in health and malfunctions in acquired and developmental disorders.
|Scharinger, Mathias; Monahan, Philip J; Idsardi, William J (2016) Linguistic category structure influences early auditory processing: Converging evidence from mismatch responses and cortical oscillations. Neuroimage 128:293-301|
|Steinberg Lowe, Mara; Lewis, Gwyneth A; Poeppel, David (2016) Effects of Part- and Whole-Object Primes on Early MEG Responses to Mooney Faces and Houses. Front Psychol 7:147|
|Tian, Xing; Zarate, Jean Mary; Poeppel, David (2016) Mental imagery of speech implicates two mechanisms of perceptual reactivation. Cortex 77:1-12|
|Almeida, Diogo; Poeppel, David; Corina, David (2016) The Processing of Biologically Plausible and Implausible forms in American Sign Language: Evidence for Perceptual Tuning. Lang Cogn Neurosci 31:361-374|
|Teng, Xiangbin; Tian, Xing; Poeppel, David (2016) Testing multi-scale processing in the auditory system. Sci Rep 6:34390|
|Ding, Nai; Melloni, Lucia; Zhang, Hang et al. (2016) Cortical tracking of hierarchical linguistic structures in connected speech. Nat Neurosci 19:158-64|
|Lewis, Gwyneth A; Poeppel, David; Murphy, Gregory L (2015) The neural bases of taxonomic and thematic conceptual relations: an MEG study. Neuropsychologia 68:176-89|
|Chait, Maria; Greenberg, Steven; Arai, Takayuki et al. (2015) Multi-time resolution analysis of speech: evidence from psychophysics. Front Neurosci 9:214|
|Overath, Tobias; McDermott, Josh H; Zarate, Jean Mary et al. (2015) The cortical analysis of speech-specific temporal structure revealed by responses to sound quilts. Nat Neurosci 18:903-11|
|Rimmele, Johanna M; Zion Golumbic, Elana; Schröger, Erich et al. (2015) The effects of selective attention and speech acoustics on neural speech-tracking in a multi-talker scene. Cortex 68:144-54|
Showing the most recent 10 out of 90 publications