Multisensory integration is at the core of many cognitive phenomena. It provides a survival advantage because it allows the brain to combine the independent estimates available from different sensory modalities into a single estimate that is more accurate than any single modality in isolation. A key obstacle to progress is our lack of knowledge about how the brain combines different modalities. If sensory modality #1 claims that the environment is "X" while sensory modality #2 claims that the environment is "Y", how can the estimates best be combined to guide behavior? An important finding in behavioral studies is that multisensory integration is Bayes-optimal; that is, the reliabilities of the different sensory modalities are taken into account when they are integrated. Sensory inputs that are reliable (more informative) receive greater weight, while sensory inputs that are less informative receive less weight. The goal of this proposal is to uncover the neural mechanisms for optimal visual-tactile integration. Our central hypothesis takes the form of a simple model in which the strengths of connections from unisensory to multisensory brain areas are modulated by the reliability of the stimulus in each modality. An unreliable stimulus results in a weak connection, decreasing the effectiveness of that modality in the integration area, while a reliable stimulus results in a strong connection and an increased ability to drive behavior. To test our model, we propose four specific aims that will examine two distinct paradigms: a touch delivered to the hand that is both seen and felt, and speech that is both seen and heard. In the first aim, we will determine the brain areas involved in processing these two types of stimuli using blood oxygen-level dependent functional magnetic resonance imaging (BOLD fMRI).
We will test the hypothesis that the intraparietal sulcus (IPS) will respond to visual and tactile touch and that the superior temporal sulcus (STS) will respond to auditory and visual speech. In the second aim, we will show that neural connection strengths are proportional to stimulus reliability. We will test the hypothesis that the effective connectivity between unisensory and multisensory areas will be proportional to the reliability of the stimulus presented in that modality. In the third aim, we will demonstrate a correlation between multisensory brain activity and behavior using multi-voxel pattern analysis (MVPA). In the fourth aim, we will reveal a causal link between brain activity and behavioral multisensory integration. Using fMRI-guided transcranial magnetic stimulation (TMS), we will test the hypothesis that TMS of multisensory areas will eliminate the behavioral advantage of multisensory stimuli and the hypothesis that TMS of unisensory areas will impair behavioral performance proportional to the reliability of the stimulus in that modality.
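The reliability-weighting principle described above is standardly formalized as inverse-variance cue combination for independent Gaussian estimates. A minimal sketch (the function name and stimulus values are illustrative, not taken from the proposal):

```python
import numpy as np

def integrate_cues(mu_v, sigma_v, mu_t, sigma_t):
    """Combine visual and tactile estimates by inverse-variance
    (reliability) weighting, the Bayes-optimal rule for two
    independent Gaussian cues."""
    w_v = 1.0 / sigma_v**2            # reliability of the visual cue
    w_t = 1.0 / sigma_t**2            # reliability of the tactile cue
    mu = (w_v * mu_v + w_t * mu_t) / (w_v + w_t)
    sigma = np.sqrt(1.0 / (w_v + w_t))  # combined estimate is at least as
    return mu, sigma                    # precise as either cue alone

# A reliable visual cue (sigma = 1) dominates a noisy tactile cue
# (sigma = 3): the combined estimate falls close to the visual one.
mu, sigma = integrate_cues(mu_v=10.0, sigma_v=1.0, mu_t=14.0, sigma_t=3.0)
```

The combined standard deviation is always smaller than that of either cue, which is the quantitative form of the "behavioral advantage of multisensory stimuli" the aims set out to test.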
We will use functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) in normal human subjects to study the organization and operation of the brain during multisensory integration.
Micheli, Cristiano; Schepers, Inga M; Ozker, Müge et al. (2018) Electrocorticography reveals continuous auditory and visual speech tracking in temporal and occipital cortex. Eur J Neurosci
Magnotti, John F; Beauchamp, Michael S (2018) Published estimates of group differences in multisensory integration are inflated. PLoS One 13:e0202908
Ozker, Muge; Yoshor, Daniel; Beauchamp, Michael S (2018) Converging Evidence From Electrocorticography and BOLD fMRI for a Sharp Functional Boundary in Superior Temporal Gyrus Related to Multisensory Speech Processing. Front Hum Neurosci 12:141
Ozker, Muge; Yoshor, Daniel; Beauchamp, Michael S (2018) Frontal cortex selects representations of the talker's mouth to aid in speech perception. Elife 7
Rennig, Johannes; Beauchamp, Michael S (2018) Free viewing of talking faces reveals mouth and eye preferring regions of the human superior temporal sulcus. Neuroimage 183:25-36
Zhu, Lin L; Beauchamp, Michael S (2017) Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus. J Neurosci 37:2697-2708
Ozker, Muge; Schepers, Inga M; Magnotti, John F et al. (2017) A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography. J Cogn Neurosci 29:1044-1060
Magnotti, John F; Beauchamp, Michael S (2017) A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech. PLoS Comput Biol 13:e1005229
Magnotti, John F; Mallick, Debshila Basu; Feng, Guo et al. (2016) Erratum to: Similar frequency of the McGurk effect in large samples of native Mandarin Chinese and American English speakers. Exp Brain Res 234:1333
Olds, Cristen; Pollonini, Luca; Abaya, Homer et al. (2016) Cortical Activation Patterns Correlate with Speech Understanding After Cochlear Implantation. Ear Hear 37:e160-72
Showing the most recent 10 out of 32 publications