Signals from the natural environment are processed by neuronal populations in the cortex. Understanding the relationship between those signals and cortical activity is central to understanding normal cortical function and how it is impaired in psychiatric and neurodevelopmental disorders. Substantial progress has been made in elucidating cortical processing of simple, parametric stimuli, and computational methods are improving descriptions of neural responses to naturalistic stimuli. However, how cortical populations encode the complex, natural inputs received during everyday perceptual experience remains largely unknown. This project aims to elucidate how natural visual inputs are represented by neuronal populations in primary visual cortex (V1). Progress to date has been limited primarily by two factors. First, during natural vision, the inputs to V1 neurons are always embedded in a spatial and temporal context, but how V1 integrates this contextual information in natural visual inputs is poorly understood. Second, prior work focused almost exclusively on single-neuron firing rates; to understand cortical representations, one must also consider the structure of population activity, namely the substantial trial-to-trial variability that is shared among neurons and evolves dynamically, because this structure shapes population information and perception. The central hypothesis of this project is that cortical response structure is modulated by visual context to approximate an optimal representation of natural visual inputs. To test this hypothesis, the project combines machine learning, used to quantify the statistical properties of natural visual inputs, with a theory of how cortical populations should encode those inputs to achieve an optimal representation, yielding concrete, falsifiable predictions for V1 response structure. The predictions will be tested with measurements of population activity in V1 of awake monkeys viewing natural images and movies.
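The abstract states that machine learning will be used to quantify the statistical properties of natural visual inputs, without specifying a method. As a purely illustrative sketch, and not the project's actual analysis pipeline, the snippet below estimates one such property, the second-order (covariance) structure of small image patches, using NumPy; the function name `patch_covariance` and all parameter choices are hypothetical.

```python
import numpy as np

def patch_covariance(images, patch_size=8, n_patches=10000, seed=None):
    """Estimate the pixel covariance of small patches sampled from a set of images.

    images : array of shape (n_images, H, W), grayscale.
    Returns a (patch_size**2, patch_size**2) covariance matrix.
    """
    rng = np.random.default_rng(seed)
    n_images, H, W = images.shape
    patches = np.empty((n_patches, patch_size * patch_size))
    for k in range(n_patches):
        i = rng.integers(n_images)                  # pick a random image
        r = rng.integers(H - patch_size + 1)        # random top-left corner
        c = rng.integers(W - patch_size + 1)
        patch = images[i, r:r + patch_size, c:c + patch_size].ravel()
        patches[k] = patch - patch.mean()           # remove local mean luminance
    return np.cov(patches, rowvar=False)

if __name__ == "__main__":
    # Synthetic images stand in for a natural-image ensemble in this example.
    rng = np.random.default_rng(0)
    fake_images = rng.standard_normal((20, 128, 128))
    C = patch_covariance(fake_images, patch_size=8, n_patches=2000, seed=1)
    # The eigen-spectrum of the patch covariance is one conventional summary
    # of second-order image statistics.
    eigvals = np.linalg.eigvalsh(C)[::-1]
    print(eigvals[:5])
```

Summaries of this kind (for example, the eigen-spectrum of the patch covariance) are one conventional way to characterize natural-image statistics that a theory of optimal population coding could take as input.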
Specific Aim 1 will determine whether modulation of V1 response structure by spatial context in static images is consistent with optimal encoding of those images, and will compare the predictive power of the proposed model with that of alternative models.
Specific Aim 2 addresses V1 encoding of dynamic natural inputs, and will test whether modulation of V1 activity by temporal context is tuned to the temporal structure of natural sensory signals, as required for optimality. Because spatial and temporal context are both present during natural vision, Specific Aim 3 will determine visual input statistics in free-viewing animals and test space-time interactions in V1 activity evoked by those inputs. This project will provide the first test of a unified functional theory of contextual modulation in V1 encoding of natural visual inputs, and will shed light on key aspects of natural vision that have been neglected to date.
This project aims to determine how neurons in the visual cortex represent the inputs encountered during perceptual experience in the natural environment, through correct integration of visual information across space and time. In individuals with neurodevelopmental and psychiatric disorders, this integration is often miscalibrated, leading to perceptual impairments. Our study will advance knowledge of the relationship between natural sensory inputs and cortical activity, which is central to understanding normal cortical function and how it is impaired in patient populations.