The same pattern of neural activity can correspond to multiple events in the world. The brain resolves this ambiguity by inferring which causal model best explains a sensory input pattern and by generating beliefs about the sensory variables in that model. The neural basis of causal inference is difficult to study, however, because this internal model is only partly accessible through behavior. Normative modeling provides a powerful way to circumvent this problem: if these computations are close enough to optimal, beliefs inferred by normative models can be used to identify potential neural correlates. This project's goal is to develop normative models of the motion tasks investigated experimentally in Projects B and C, to generate trial-by-trial as well as dynamic moment-by-moment predictions of key latent variables in the computation, and to investigate their neural implementation using data collected in those projects. These models will be fit to behavioral data to determine how the brain uses causal inference applied to retinal image motion to infer the animal's self-motion, to decide whether an object is moving in the world, and to infer the object's velocity and depth.

For the trial-based tasks in Project B, Aim 1 will start with the generative model of sensory inputs and invert it to produce causal inferences. Preliminary work has extended and unified previous efforts into a novel Bayesian model that uses retinal motion and depth to segment visual scenes during self-motion. Psychophysical tests show that this static model agrees with perceptual experience. This model will be used to predict neural responses in cortical motion-processing areas MT and MSTd by assuming that these responses represent Bayesian posterior beliefs.
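The core computation of Aim 1 can be illustrated with a minimal sketch, which is not the proposal's actual model: an observer compares two causal hypotheses for the retinal velocity of a scene element, one in which that motion is fully explained by self-motion (the element is stationary in the world) and one in which an independently moving object adds extra variance. The function name, the Gaussian likelihoods, and all parameter values below are illustrative assumptions.

```python
import math

def causal_posterior(retinal_v, self_motion_v,
                     sigma_r=1.0, sigma_obj=3.0, p_stationary=0.7):
    """Posterior probability that a scene element is stationary in the world,
    given its retinal velocity and an estimate of self-motion velocity.

    Hypothetical toy model: under the 'stationary' hypothesis, retinal motion
    should equal the negative of self-motion (pure parallax, sensory noise
    sigma_r); under 'moving', unknown object motion in the world contributes
    additional variance sigma_obj**2. All parameter values are illustrative.
    """
    def gauss(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    # Likelihood if the element is stationary: retinal motion = -self-motion + noise
    like_stat = gauss(retinal_v, -self_motion_v, sigma_r)
    # Likelihood if the element moves: broader distribution from world motion
    like_move = gauss(retinal_v, -self_motion_v, math.hypot(sigma_r, sigma_obj))

    num = like_stat * p_stationary
    return num / (num + like_move * (1 - p_stationary))
```

In this sketch, retinal motion consistent with the self-motion estimate yields a high posterior that the element is stationary, while a large discrepancy drives the posterior toward the "moving object" interpretation; the proposal's model additionally conditions on depth.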
In Aim 2, because the real world is not static, we will develop a dynamic model that describes normative causal inference and inverse rational control in real time. This model will predict which latent variables the brain needs to track in the continuous, naturalistic tasks of Project C. Preliminary work shows that a simplified model using dynamic causal inference can maintain a running estimate of self-motion velocity and of whether an object is stationary or moving.
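The flavor of such a running estimate can be sketched as follows, again as an illustration rather than the proposal's model: a one-dimensional Kalman filter tracks self-motion velocity from noisy vestibular-like samples, and a two-state belief (object stationary vs. moving) is updated recursively through a sticky transition prior. All names and parameter values are illustrative assumptions.

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def run_filter(vestibular, retinal, q=0.5, sigma_v=1.0,
               sigma_r=1.0, sigma_obj=3.0, stay=0.95):
    """Toy dynamic causal inference: per-sample estimates of self-motion
    velocity and of the probability that an object is moving in the world.

    Hypothetical sketch: random-walk Kalman filter (process noise q,
    observation noise sigma_v) for self-motion, combined with a recursive
    two-state belief whose dynamics are a sticky prior (persistence 'stay').
    """
    v_hat, var = 0.0, 1.0   # self-motion estimate and its variance
    p_move = 0.5            # belief that the object is moving in the world
    trace = []
    for obs_v, obs_r in zip(vestibular, retinal):
        # Kalman predict (random-walk dynamics), then update from vestibular input
        var += q
        k = var / (var + sigma_v ** 2)
        v_hat += k * (obs_v - v_hat)
        var *= (1 - k)
        # Propagate the causal belief through the sticky transition prior
        prior_move = p_move * stay + (1 - p_move) * (1 - stay)
        # Does retinal motion exceed what the self-motion estimate predicts?
        like_stat = gauss(obs_r, -v_hat, sigma_r)
        like_move = gauss(obs_r, -v_hat, math.hypot(sigma_r, sigma_obj))
        p_move = (like_move * prior_move /
                  (like_move * prior_move + like_stat * (1 - prior_move)))
        trace.append((v_hat, p_move))
    return trace
```

When retinal motion is consistent with pure parallax, the moving-object belief decays toward a floor set by the transition prior; a sustained discrepancy drives it toward one, giving the moment-by-moment latent-variable trajectories that Aim 2 would compare with neural data.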
Aim 2 will extend this model to more complex sensory inputs and to object-motion dynamics on timescales similar to those of inference. It will also develop a real-time rational control model to generate quantitative hypotheses about the neural correlates of goal-directed control for animals acting upon the percepts produced by causal inference. We will fit this model to observed behavior to reverse-engineer animals' beliefs during goal-directed control. When the proposed work is complete, the static model will link three physically interconnected variables (object motion, self-motion, and depth), which may be computed and represented in different neural populations, to predict how beliefs about these variables influence each other and propagate across the brain. The dynamic model will extend the study of causal inference to more realistic conditions, in which sensory data and beliefs evolve over time, closing the understudied loop between perception and action.
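The fitting step, inverting observed behavior to recover model parameters, can be sketched in its simplest trial-based form; the proposal's inverse rational control framework is far richer (it recovers beliefs and a policy jointly), and everything below, from the two-hypothesis observer to the grid-search maximum-likelihood fit of a single prior parameter from binary "object moving" reports, is an illustrative assumption.

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def choice_prob(retinal_v, self_v, p_stationary, sigma_r=1.0, sigma_obj=3.0):
    """Toy model probability of reporting 'object moving' on one trial,
    given retinal velocity, self-motion, and the observer's stationarity prior."""
    like_stat = gauss(retinal_v, -self_v, sigma_r)
    like_move = gauss(retinal_v, -self_v, math.hypot(sigma_r, sigma_obj))
    return (like_move * (1 - p_stationary) /
            (like_move * (1 - p_stationary) + like_stat * p_stationary))

def fit_prior(trials, choices):
    """Recover the stationarity prior that best explains observed binary
    reports by maximizing the Bernoulli log-likelihood over a parameter grid.

    trials:  list of (retinal_v, self_v) pairs
    choices: list of bools, True = reported 'moving'
    """
    best, best_ll = None, -math.inf
    for p_stat in [i / 100 for i in range(1, 100)]:
        ll = 0.0
        for (r, s), c in zip(trials, choices):
            p = min(max(choice_prob(r, s, p_stat), 1e-9), 1 - 1e-9)
            ll += math.log(p) if c else math.log(1 - p)
        if ll > best_ll:
            best, best_ll = p_stat, ll
    return best
```

An animal that reports "moving" more often than the stimuli warrant is fit with a weaker stationarity prior, and vice versa; the fitted parameters then stand in for the animal's internal model when generating belief predictions to compare against neural activity.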