Face-to-face communication is the most important form of human interaction. When conversing, we receive auditory information from the talker's voice and visual information from the talker's face. Combining these two sources of information is difficult: speech arrives rapidly (about five syllables per second), and the correspondence between vocal sounds and the talker's mouth movements is complex. We propose to study the neural mechanisms that underlie multisensory (auditory and visual) speech perception using electrocorticography (ECoG), a recording technique in which electrodes are implanted in the brains of patients with epilepsy. ECoG is the ideal technique for our research question because it measures human brain activity with very high temporal and spatial resolution (millisecond/millimeter) and can sample the diverse network of brain areas active during speech perception. Electrodes in auditory cortex respond strongly to the auditory component of speech, while electrodes in the occipital lobe respond strongly to the visual component; between them, posterior lateral temporal cortex is thought to integrate auditory and visual speech.

Poor hearing is one of the most common disabilities among veterans. Because speech is the basis of our social relationships, poor speech perception can lead to social isolation, depression, and other health problems. A better understanding of the neural mechanisms underlying multisensory speech perception will allow us to improve veterans' ability to understand speech, leading to major improvements in their quality of life. To ensure that our results are immediately applicable to real-world situations, we will study brain responses to natural English words spoken by English talkers.

In addition to its potential clinical benefits, the proposed research will also have a significant impact on basic science. Multisensory integration is a major new field in neuroscience and has proven to be fertile ground for mathematical models of brain and behavior. Our work will serve as a bridge between experiments and models of simple multisensory behaviors, such as auditory-visual localization, and more complex cognitive tasks, exemplified by multisensory speech perception. The successful completion of these studies will represent a major step forward in our understanding of the neural substrates of multisensory speech perception.
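To make concrete the class of mathematical models referred to above, the following is a minimal sketch of the standard maximum-likelihood (reliability-weighted) model of cue combination used in studies of auditory-visual localization; the symbols are illustrative notation, not quantities defined in the proposal. Given unisensory estimates \hat{s}_A and \hat{s}_V with variances \sigma_A^2 and \sigma_V^2, the integrated estimate is

\hat{s}_{AV} = w_A \hat{s}_A + w_V \hat{s}_V, \qquad w_A = \frac{\sigma_V^2}{\sigma_A^2 + \sigma_V^2}, \quad w_V = \frac{\sigma_A^2}{\sigma_A^2 + \sigma_V^2},

with combined variance

\sigma_{AV}^2 = \frac{\sigma_A^2 \, \sigma_V^2}{\sigma_A^2 + \sigma_V^2} \le \min(\sigma_A^2, \sigma_V^2),

so the multisensory estimate is never less reliable than the better unisensory cue. Extending such reliability-weighted models from simple localization to speech perception is one example of the bridge described above.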

Public Health Relevance

The ability to use visual information from a talker's face during speech perception can help us overcome hearing deficits. This is critical for U.S. veterans, many of whom struggle to process and understand speech because of acquired hearing loss. At present, little is known about the neural computations that enable the efficient combination of auditory and visual information. We propose to remedy this knowledge gap using electrocorticography, which is ideally suited for studying speech perception because of its excellent temporal and spatial resolution (millisecond/millimeter). The goal of our project is to determine how auditory voice sounds and visual mouth movements are transformed into perceived speech. A better understanding of the neural anatomy and computations enabling multisensory speech perception will allow us to design therapies that promote multisensory integration in patients with sensory dysfunction, especially hearing loss, but also other language impairments, such as those caused by stroke.

Agency: National Institutes of Health (NIH)
Institute: Veterans Affairs (VA)
Type: Non-HHS Research Projects (I01)
Project #: 2I01CX001122-04A1
Application #: 9781106
Study Section: Special Emphasis Panel (ZRD1)
Project Start: 2015-04-01
Project End: 2023-12-31
Budget Start: 2020-01-01
Budget End: 2020-12-31
Support Year: 4
Fiscal Year: 2020
Total Cost:
Indirect Cost:
Name: Michael E. DeBakey VA Medical Center
Department:
Type:
DUNS #: 078446044
City: Houston
State: TX
Country: United States
Zip Code: 77030
Magnotti, John F; Beauchamp, Michael S (2018) Published estimates of group differences in multisensory integration are inflated. PLoS One 13:e0202908
Ozker, Muge; Yoshor, Daniel; Beauchamp, Michael S (2018) Converging Evidence From Electrocorticography and BOLD fMRI for a Sharp Functional Boundary in Superior Temporal Gyrus Related to Multisensory Speech Processing. Front Hum Neurosci 12:141
Ozker, Muge; Yoshor, Daniel; Beauchamp, Michael S (2018) Frontal cortex selects representations of the talker's mouth to aid in speech perception. eLife 7:
Ozker, Muge; Schepers, Inga M; Magnotti, John F et al. (2017) A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography. J Cogn Neurosci 29:1044-1060
Bosking, William H; Beauchamp, Michael S; Yoshor, Daniel (2017) Electrical Stimulation of Visual Cortex: Relevance for the Development of Visual Cortical Prosthetics. Annu Rev Vis Sci 3:141-166
Schepers, Inga M; Yoshor, Daniel; Beauchamp, Michael S (2015) Electrocorticography Reveals Enhanced Visual Cortex Responses to Visual Speech. Cereb Cortex 25:4103-10