To optimally estimate a property of the environment such as object size, location or orientation, one should use all available sensory information and combine it with prior information, i.e., a probability distribution across possible world states, reflecting knowledge of scenes one is likely to encounter. Sensory input typically arises from multiple sensory modalities, and is uncertain due to physical and neural noise. How are these sources of information combined? An ideal observer will combine all sources of information, taking into account the reliability of each source. In addition, such an observer needs to consider alternative causes of discrepancies between sources of sensory information. Do two sources disagree so much that one should conclude they derive from different objects, and therefore have separate causes in the environment? Or does a discrepancy indicate that one or both sources of information (e.g., sense modalities) have become uncalibrated? Many studies define "optimal" cue integration as maximizing the reliability of the combined-cue estimate, which is generally consistent with human behavior. Do observers have access to the resulting reliability estimate to determine their confidence in this estimate, perhaps to inform subsequent behavior? What computations does the brain use to solve these problems, and how are these computations implemented?

We propose research aimed at answering these questions. In our first aim we propose to develop biologically realistic models of how such computations are implemented, i.e., testable neural-network models of optimal behavior for sensory estimation, causal inference, recalibration and confidence. Second, we propose a series of experiments in an area that has been little studied in the framework of optimal cue integration: the combination of visual, tactile and proprioceptive inputs for localization. These experiments test whether humans perform optimal integration and recalibration of multisensory cues and priors under unclear causal structures in scenarios that are more complex than typically studied (e.g., involving dynamics and context effects) and thus more similar to the real world. These studies are important and innovative on their own. In addition, they will provide the foundation for Aim 3, in which we will probe the implementation of cue combination, the influence of priors, causal inference, recalibration and confidence in the human brain using fMRI. Together, the experimental data from Aims 2 & 3 will be used to test the models from Aim 1.

These studies will shed light on the way in which multisensory stimuli are encoded to form a coherent percept, the information considered when perceptual decisions are made, and how vision is used to guide us in an ever-changing world. These experiments on normal humans will provide a starting point for understanding multisensory perception and perceptual adaptation in individuals in whom these systems are compromised by conditions that impact sensory input (e.g., amblyopia, AMD, stroke).
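As a point of reference for the notion of "optimal" cue integration used above, the sketch below illustrates the standard reliability-weighted (inverse-variance) combination rule for independent Gaussian cue estimates. It is not the proposal's neural-network model; the function name combine_cues and the numerical values are illustrative assumptions only.

```python
import numpy as np

def combine_cues(estimates, variances):
    """Reliability-weighted (inverse-variance) combination of independent
    Gaussian cue estimates. Each cue's weight is proportional to its
    reliability (1/variance), and the combined estimate has lower variance
    than any single cue. This is the textbook rule, not the proposal's model.
    """
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    combined_estimate = float(np.dot(weights, estimates))
    combined_variance = 1.0 / reliabilities.sum()
    return combined_estimate, combined_variance

# Hypothetical example: visual and proprioceptive estimates of hand
# position (cm); the less noisy visual cue dominates the combined estimate.
est, var = combine_cues(estimates=[10.0, 13.0], variances=[1.0, 4.0])
print(est, var)  # 10.6, 0.8
```

Note that this rule presupposes that both cues arise from a single common cause and that both sensors are calibrated; the causal-inference and recalibration questions raised above concern precisely the cases where those assumptions may fail.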
The proposed work benefits public health by systematically assessing the behavioral and neural mechanisms for making perceptual decisions using multiple sensory systems, and by examining how optimal decisions must take into account prior knowledge, the uncertainty of sensory information, and possible changes in the sensory systems themselves. A variety of medical conditions can impact both the reliability and accuracy of sensory information (e.g., AMD, amblyopia, stroke). The proposed research will improve our understanding of how sensory inputs from multiple modalities are combined so as to optimize a perceptual decision, and thus can help guide the design of rehabilitative plans and sensory-substitution devices when sensory input is disrupted (changes in bias, gain and/or variability) by disease or other health-related conditions.