Communicating through spoken language feels effortless and automatic to listeners with no hearing deficits or language processing problems. But this subjective ease belies the number and complexity of the operations that, in aggregate, constitute speech perception. Transforming the acoustic signal arriving at the ear into the abstract representations that underpin language processing requires many intermediate steps, and when one or several of these operations malfunction, pathologies of hearing, speech perception, or language processing can result. Developing a theoretically well-motivated, mechanistic, and neurobiologically grounded understanding of this system remains one of the foundational challenges of the cognitive neuroscience of hearing, speech, and language.

The research program outlined in this proposal strives to further develop a brain-based model of speech perception that is motivated by the insights of linguistic and psychological research, on the one hand, and sensitive to the physical (acoustic) and neurobiological constraints of speech processing, on the other. The proposed experiments use the noninvasive electrophysiological neuroimaging technique magnetoencephalography (MEG), paired with magnetic resonance imaging (MRI). MEG is particularly useful because it combines very high temporal resolution (necessary because speech processing is fast) with good spatial resolution (necessary to understand the anatomic organization of the system).

We investigate the speech processing system in the context of three specific research aims. The first aim seeks a more precise characterization of the functional architecture of speech processing in the brain: in particular, the computational contribution of the critical regions mediating speech, in both perception and production.
Furthermore, we test whether the same architectural (dual-stream) model helps us understand both the perception of speech (old news) and the covert (internal) and overt production of speech (new news). The studies in the second aim test whether the intrinsic brain rhythms (neural oscillations) observed in animal and human studies play a causal role in speech processing, as has recently been hypothesized. For example, the alignment of slow brain rhythms with the input signal may be necessary to understand speech, by parsing the continuous spoken input into the right 'chunk size' for further analysis. In the third aim, we turn to the perennial puzzle of brain asymmetry and its role in speech processing: building on the oscillation studies, we evaluate whether left and right auditory regions execute the same or different analyses of the speech input. As a group, these studies serve to further specify the 'parts list' of auditory and speech processing, with a special emphasis on timing and its implications for health and disease.
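The entrainment hypothesis in the second aim is often quantified with measures such as the phase-locking value (PLV) between a neural signal and the speech envelope. The following Python sketch is a minimal illustration, not part of the proposal: the signals, the 4 Hz rate, and all numbers are invented for demonstration. It computes the PLV between a simulated 4 Hz 'envelope' and two simulated rhythms, one phase-aligned and one with drifting phase.

```python
import numpy as np

def analytic_signal(x):
    # Analytic signal via FFT (assumes even-length real input),
    # from which instantaneous phase can be extracted.
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:N // 2] = 2.0   # double positive frequencies
    h[N // 2] = 1.0     # keep Nyquist bin
    return np.fft.ifft(X * h)

def phase_locking_value(a, b):
    # Mean resultant length of the phase differences: 1 means the
    # two signals keep a constant phase relation, 0 means no relation.
    pa = np.angle(analytic_signal(a))
    pb = np.angle(analytic_signal(b))
    return np.abs(np.mean(np.exp(1j * (pa - pb))))

# Toy example: a 4 Hz "speech envelope" and two "neural" rhythms.
fs, dur = 1000, 2.0
t = np.arange(int(fs * dur)) / fs
envelope = np.sin(2 * np.pi * 4 * t)
entrained = np.sin(2 * np.pi * 4 * t + 0.3)  # constant lag -> high PLV
rng = np.random.default_rng(0)
drift = np.cumsum(rng.normal(0, 0.1, t.size))  # random-walk phase
unrelated = np.sin(2 * np.pi * 4 * t + drift)  # drifting phase -> lower PLV

print(phase_locking_value(envelope, entrained))
print(phase_locking_value(envelope, unrelated))
```

A constant phase lag yields a PLV near 1, whereas a drifting phase relation pulls the value down; the hypothesized causal role of oscillations goes beyond such correlational alignment, which is what the proposed studies are designed to test.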

Public Health Relevance

Communicating with spoken language is the most common human interaction. The many complex processes that underlie comprehension can malfunction at many levels, leading to hearing, speech, or language disorders. Current noninvasive brain recording techniques can help us understand how the speech processing system functions in health and malfunctions in acquired and developmental disorders.

National Institutes of Health (NIH)
National Institute on Deafness and Other Communication Disorders (NIDCD)
Research Project (R01)
Study Section: Language and Communication Study Section (LCOM)
Program Officer: Shekim, Lana O
New York University, Schools of Arts and Sciences, New York, United States
Getz, Heidi; Ding, Nai; Newport, Elissa L et al. (2018) Cortical tracking of constituent structure in language acquisition. Cognition 181:135-140
Teng, Xiangbin; Tian, Xing; Doelling, Keith et al. (2018) Theta band oscillations reflect more than entrainment: behavioral and neural evidence demonstrates an active chunking process. Eur J Neurosci 48:2770-2782
Gwilliams, Laura; Linzen, Tal; Poeppel, David et al. (2018) In Spoken Word Recognition, the Future Predicts the Past. J Neurosci 38:7585-7599
Dikker, Suzanne; Wan, Lu; Davidesco, Ido et al. (2017) Brain-to-Brain Synchrony Tracks Real-World Dynamic Group Interactions in the Classroom. Curr Biol 27:1375-1380
Tal, Idan; Large, Edward W; Rabinovitch, Eshed et al. (2017) Neural Entrainment to the Beat: The "Missing-Pulse" Phenomenon. J Neurosci 37:6331-6341
Teng, Xiangbin; Tian, Xing; Rowland, Jess et al. (2017) Concurrent temporal channels for auditory processing: Oscillatory neural entrainment reveals segregation of function at different scales. PLoS Biol 15:e2000812
Ding, Nai; Melloni, Lucia; Yang, Aotian et al. (2017) Characterizing Neural Entrainment to Hierarchical Linguistic Units using Electroencephalography (EEG). Front Hum Neurosci 11:481
Ten Oever, Sanne; Schroeder, Charles E; Poeppel, David et al. (2017) Low-Frequency Cortical Oscillations Entrain to Subthreshold Rhythmic Auditory Stimuli. J Neurosci 37:4903-4912
Krakauer, John W; Ghazanfar, Asif A; Gomez-Marin, Alex et al. (2017) Neuroscience Needs Behavior: Correcting a Reductionist Bias. Neuron 93:480-490
Ding, Nai; Melloni, Lucia; Tian, Xing et al. (2017) Rule-based and Word-level Statistics-based Processing of Language: Insights from Neuroscience. Lang Cogn Neurosci 32:570-575

Showing the most recent 10 out of 104 publications