The representation of speech and other complex auditory signals in the human brain constitutes a major interdisciplinary challenge for cognitive neuroscience. Understanding in a principled manner how acoustic signals are transformed and ultimately recognized as words in a speaker's mental dictionary requires integrating knowledge across fields ranging from single-cell recording in auditory cortex to linguistic theory. The research program outlined here focuses on two subroutines of speech processing. The first specific aim investigates the hypothesis that speech is analyzed concurrently on two time scales in human auditory cortex, one corresponding to analysis at the syllabic scale and the other at the segmental (phonemic) scale. This multi-time resolution model, which also provides an account of hemispheric asymmetry in audition, is tested in a series of behavioral and electrophysiological studies. The goal is to provide a theoretically motivated and neurobiologically plausible answer to how acoustic signals are fractionated in time and how they map onto words stored in the brain.
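As a purely illustrative aside (not drawn from the proposal itself), the two-timescale idea can be sketched in a few lines of Python: a speech amplitude envelope is band-pass filtered at a slow, syllable-scale rate and at a fast, segment-scale rate, yielding the two concurrent representations the model posits. The sampling rate, the band edges (roughly theta, 4-8 Hz, and low gamma, 25-50 Hz), and the noise stand-in for a real envelope are assumptions made only for this sketch.

```python
# Illustrative sketch only -- parameters are assumed, not taken from the proposal.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000  # assumed envelope sampling rate (Hz)

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Stand-in for a real speech amplitude envelope: the envelope of 1 s of white noise.
rng = np.random.default_rng(0)
envelope = np.abs(hilbert(rng.standard_normal(fs)))

# Concurrent analysis at two time scales, as hypothesized in the first aim:
syllabic = bandpass(envelope, 4, 8, fs)     # slow scale, ~125-250 ms windows
segmental = bandpass(envelope, 25, 50, fs)  # fast scale, ~20-40 ms windows
```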
The second aim encompasses behavioral (often audiovisual) and electrophysiological studies that test how, and specifically how abstractly, speech and words are represented in the human brain. The goal is to test models of the cortical encoding of speech sounds and words. The principal method used in this research program is magnetoencephalography (MEG), typically combined with parallel behavioral studies. Other non-invasive recording modalities (EEG, fMRI) are also employed to validate and extend the data obtained with any single approach.

Public Health Relevance

Successfully perceiving speech and recognizing words are processes that lie at the basis of human communication. A mechanistic characterization of the brain structures that mediate these skills is essential for understanding the range of disorders associated with problems in speech processing. Health-related conditions ranging from dyslexia and autism in childhood to aphasia and Alzheimer's disease in the aging population have been repeatedly linked to problems with the auditory analysis of complex signals and with the ability to process words appropriately. The development of innovative diagnostic, interventional, and therapeutic approaches depends critically on a richer understanding of the brain basis of the processes underlying human speech.

National Institutes of Health (NIH)
National Institute on Deafness and Other Communication Disorders (NIDCD)
Research Project (R01)
Study Section: Language and Communication Study Section (LCOM)
Program Officer: Shekim, Lana O
New York University
Schools of Arts and Sciences
New York, United States
Farbood, Morwaread M; Rowland, Jess; Marcus, Gary et al. (2015) Decoding time for the identification of musical key. Atten Percept Psychophys 77:28-35
Scharinger, Mathias; Idsardi, William J (2014) Sparseness of vowel category structure: Evidence from English dialect comparison. Lingua 140:35-51
Poeppel, David (2014) The neuroanatomic and neurophysiological infrastructure for speech and language. Curr Opin Neurobiol 28:142-9
ten Oever, Sanne; Schroeder, Charles E; Poeppel, David et al. (2014) Rhythmicity and cross-modal temporal cues facilitate detection. Neuropsychologia 63:43-50
Lewis, Gwyneth; Poeppel, David (2014) The role of visual representations during the lexical access of spoken words. Brain Lang 134:1-10
Doelling, Keith B; Arnal, Luc H; Ghitza, Oded et al. (2014) Acoustic landmarks drive delta-theta oscillations to enable speech comprehension by facilitating perceptual parsing. Neuroimage 85 Pt 2:761-8
Dillon, Brian; Dunbar, Ewan; Idsardi, William (2013) A single-stage approach to learning phonological categories: insights from Inuktitut. Cogn Sci 37:344-77
Xiang, Juanjuan; Poeppel, David; Simon, Jonathan Z (2013) Physiological evidence for auditory modulation filterbanks: cortical responses to concurrent modulations. J Acoust Soc Am 133:EL7-12
Almeida, Diogo; Poeppel, David (2013) Word-specific repetition effects revealed by MEG and the implications for lexical access. Brain Lang 127:497-509
Zion Golumbic, Elana; Cogan, Gregory B; Schroeder, Charles E et al. (2013) Visual input enhances selective speech envelope tracking in auditory cortex at a "cocktail party". J Neurosci 33:1417-26

Showing the most recent 10 out of 64 publications