Speech perception is inherently multisensory: when conversing with someone we can see, our brains combine auditory information from the voice with visual information from the face. Speech perception lies at the heart of our interactions with other people and is thus one of our most important cognitive abilities. However, there is a large gap in our knowledge about this uniquely human skill because most experimental techniques available in humans suffer from poor spatiotemporal resolution. To remedy this gap, we will examine the neural mechanisms of audiovisual speech perception using intracranial electroencephalography (iEEG) in humans. Audiovisual speech perception occurs in the posterior superior temporal gyrus and sulcus (pSTG). Understanding the dynamics of the neural computations within pSTG at the mesoscale (neurons organized into columns and patches) has been impossible in humans. We propose to leverage two technical innovations within the fast-changing field of iEEG to study these computations for the first time: first, high-resolution intracranial electrode grids, which record from a cortical volume hundreds of times smaller than that sampled by standard iEEG grids; second, NeuroGrids, which record single-neuron activity from a non-penetrating film of electrodes placed on the cortical surface. Our causal inference model requires the existence of distinct auditory, visual, and audiovisual speech representations.
Aim 1 will search for these representations in pSTG.
Aim 2 will examine low-frequency oscillations in pSTG to determine their role in multisensory speech perception. If successful, these Aims will provide a comprehensive account of the neural mechanisms of multisensory speech perception, including an explanation for the long-standing mystery of the perceptual benefit of visual speech.
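To make the causal inference framing mentioned above concrete, the sketch below shows a minimal Bayesian causal inference computation for audiovisual cue combination, in the spirit of the standard model of Kording et al. (2007). It is purely illustrative: the proposal does not specify its model's form or parameters, and all function names and numerical values here are assumptions, not the investigators' implementation.

```python
import numpy as np

# Minimal sketch of Bayesian causal inference for audiovisual cues.
# NOT the project's actual model; parameters below are illustrative.

def likelihood_common(x_a, x_v, sa, sv, sp):
    # p(x_a, x_v | one common cause), marginalizing over the shared source
    var = sa**2 * sv**2 + sa**2 * sp**2 + sv**2 * sp**2
    num = (x_a - x_v)**2 * sp**2 + x_a**2 * sv**2 + x_v**2 * sa**2
    return np.exp(-0.5 * num / var) / (2 * np.pi * np.sqrt(var))

def likelihood_separate(x_a, x_v, sa, sv, sp):
    # p(x_a, x_v | two independent causes)
    va, vv = sa**2 + sp**2, sv**2 + sp**2
    return (np.exp(-0.5 * (x_a**2 / va + x_v**2 / vv))
            / (2 * np.pi * np.sqrt(va * vv)))

def p_common(x_a, x_v, sa=1.0, sv=0.5, sp=10.0, prior=0.5):
    # Posterior probability that the voice and the face share one cause
    lc = likelihood_common(x_a, x_v, sa, sv, sp)
    ls = likelihood_separate(x_a, x_v, sa, sv, sp)
    return lc * prior / (lc * prior + ls * (1 - prior))

# Congruent audiovisual speech yields a high probability of a common cause,
# so the auditory and visual estimates are fused; a large discrepancy
# (e.g., mismatched voice and face) keeps the representations separate.
print(p_common(0.2, 0.1))   # small audiovisual discrepancy -> near 1
print(p_common(0.2, 8.0))   # large audiovisual discrepancy -> near 0
```

The relevance to the Aims is that such a model only works if the brain maintains separate auditory, visual, and fused audiovisual estimates, which is why Aim 1 searches for these distinct representations in pSTG.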

Public Health Relevance

Understanding speech is one of the most important functions of the human brain. We use information from both the auditory modality (the voice of the person we are talking to) and the visual modality (the facial movements of the person we are talking to) to understand speech. We will use intracranial electroencephalography to study the organization and operation of the brain during audiovisual speech perception.

Agency
National Institutes of Health (NIH)
Institute
National Institute of Neurological Disorders and Stroke (NINDS)
Type
Research Project--Cooperative Agreements (U01)
Project #
1U01NS113339-01
Application #
9829909
Study Section
Special Emphasis Panel (ZNS1)
Program Officer
Gnadt, James W
Project Start
2019-09-15
Project End
2024-05-31
Budget Start
2019-09-15
Budget End
2020-05-31
Support Year
1
Fiscal Year
2019
Total Cost
Indirect Cost
Name
Baylor College of Medicine
Department
Neurosurgery
Type
Schools of Medicine
DUNS #
051113330
City
Houston
State
TX
Country
United States
Zip Code
77030