Multisensory integration is at the core of many cognitive phenomena. It provides a survival advantage because it allows the brain to combine the independent estimates available from different sensory modalities into a single estimate that is more accurate than any single modality in isolation. A key obstacle to progress is our lack of knowledge about how the brain combines different modalities. If sensory modality #1 claims that the environment is "X" while sensory modality #2 claims that the environment is "Y", how can the estimates best be combined to guide behavior? An important finding in behavioral studies is that multisensory integration is Bayes-optimal; that is, the reliabilities of the different sensory modalities are taken into account when integrating them. Sensory inputs that are reliable (more informative) receive greater weight, while sensory inputs that are less informative receive less weight. The goal of this proposal is to uncover the neural mechanisms for optimal visual-tactile integration. Our central hypothesis takes the form of a simple model in which the strengths of connections from unisensory to multisensory brain areas are modulated by the reliability of the stimulus in each modality. An unreliable stimulus results in a weak connection, decreasing the effectiveness of that modality in the integration area, while a reliable stimulus results in a strong connection and an increased ability to drive behavior. To test our model, we propose four specific aims that will examine two distinct paradigms: a touch delivered to the hand that is both seen and felt; and speech that is both seen and heard. In the first aim, we will determine the brain areas involved in these two types of stimuli using blood oxygen-level dependent functional magnetic resonance imaging (BOLD fMRI).
We will test the hypothesis that the intraparietal sulcus (IPS) will respond to visual and tactile touch and that the superior temporal sulcus (STS) will respond to auditory and visual speech. In the second aim, we will show that neural connection strengths are proportional to stimulus reliability. We will test the hypothesis that the effective connectivity between unisensory and multisensory areas will be proportional to the reliability of the stimulus presented in that modality. In the third aim, we will demonstrate a correlation between multisensory brain activity and behavior using multi-voxel pattern analysis (MVPA). In the fourth aim, we will reveal a causal link between brain activity and behavioral multisensory integration. Using fMRI-guided transcranial magnetic stimulation (TMS), we will test the hypothesis that TMS of multisensory areas will eliminate the behavioral advantage of multisensory stimuli, and the hypothesis that TMS of unisensory areas will impair behavioral performance in proportion to the reliability of the stimulus in that modality.
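The Bayes-optimal weighting described above corresponds to the standard maximum-likelihood cue-combination rule: each modality's estimate is weighted by its reliability (inverse variance), and the combined estimate is always at least as precise as the best single modality. A minimal Python sketch, with illustrative numbers not taken from the proposal:

```python
def integrate(estimates, variances):
    """Bayes-optimal (maximum-likelihood) cue combination.

    Each unisensory estimate is weighted by its reliability, defined
    as the inverse of its variance. Returns the combined estimate and
    its variance, which is never larger than any input variance.
    """
    reliabilities = [1.0 / v for v in variances]
    total_reliability = sum(reliabilities)
    combined = sum(r * e for r, e in zip(reliabilities, estimates)) / total_reliability
    combined_variance = 1.0 / total_reliability
    return combined, combined_variance

# Hypothetical example: a reliable visual cue (variance 1.0) and a noisy
# tactile cue (variance 4.0) both estimate a touch location on the hand.
# The visual cue gets four times the weight of the tactile cue.
location, variance = integrate([10.0, 14.0], [1.0, 4.0])
# location = 10.8 (pulled toward the reliable cue), variance = 0.8 (< 1.0)
```

The key property, which motivates the proposal's connection-strength hypothesis, is that the weight on each modality rises and falls with that modality's reliability.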

Public Health Relevance

Multisensory integration is at the core of many cognitive phenomena. We will use functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) in normal human subjects to study the organization and operation of the brain during multisensory integration.

Agency: National Institutes of Health (NIH)
Institute: National Institute of Neurological Disorders and Stroke (NINDS)
Type: Research Project (R01)
Project #: 5R01NS065395-04
Application #: 8416984
Study Section: Cognitive Neuroscience Study Section (COG)
Program Officer: Gnadt, James W
Project Start: 2010-02-01
Project End: 2015-01-31
Budget Start: 2013-02-01
Budget End: 2014-01-31
Support Year: 4
Fiscal Year: 2013
Total Cost: $278,102
Indirect Cost: $91,917
Name: University of Texas Health Science Center Houston
Department: Neurosciences
Type: Schools of Medicine
DUNS #: 800771594
City: Houston
State: TX
Country: United States
Zip Code: 77225
Ozker, Muge; Schepers, Inga M; Magnotti, John F et al. (2017) A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography. J Cogn Neurosci 29:1044-1060
Magnotti, John F; Beauchamp, Michael S (2017) A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech. PLoS Comput Biol 13:e1005229
Zhu, Lin L; Beauchamp, Michael S (2017) Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus. J Neurosci 37:2697-2708
Magnotti, John F; Mallick, Debshila Basu; Feng, Guo et al. (2016) Erratum to: Similar frequency of the McGurk effect in large samples of native Mandarin Chinese and American English speakers. Exp Brain Res 234:1333
Olds, Cristen; Pollonini, Luca; Abaya, Homer et al. (2016) Cortical Activation Patterns Correlate with Speech Understanding After Cochlear Implantation. Ear Hear 37:e160-72
Schepers, Inga M; Yoshor, Daniel; Beauchamp, Michael S (2015) Electrocorticography Reveals Enhanced Visual Cortex Responses to Visual Speech. Cereb Cortex 25:4103-10
Gurler, Demet; Doyle, Nathan; Walker, Edgar et al. (2015) A link between individual differences in multisensory speech perception and eye movements. Atten Percept Psychophys 77:1333-41
Jiang, Fang; Beauchamp, Michael S; Fine, Ione (2015) Re-examining overlap between tactile and visual motion responses within hMT+ and STS. Neuroimage 119:187-96
Magnotti, John F; Basu Mallick, Debshila; Feng, Guo et al. (2015) Similar frequency of the McGurk effect in large samples of native Mandarin Chinese and American English speakers. Exp Brain Res 233:2581-6
Beauchamp, Michael S (2015) The social mysteries of the superior temporal sulcus. Trends Cogn Sci 19:489-90

Showing the most recent 10 out of 27 publications