The primary visual cortex (V1) can learn to encode spatiotemporal relationships from visual experience and uses the resulting functional memory to actively predict how the visual scene will unfold in time. This demonstrates a canonical cortical function localized in an experimentally accessible region. We propose to leverage the tools of modern neuroscience to mechanistically dissect this ability in the mouse, with the overall aim of developing a description of how the neocortex learns to represent temporal information. The primary goals of this work are, first, to understand how similar forms of visual stimulation drive different forms of short- and long-term plasticity that can encode either spatial or temporal information, and second, to identify the distinct mechanisms involved. In addition to their direct relevance to sensory neurobiology, visual physiology, and various psychiatric and neurological disorders, our experiments will address the wider question of how cortical circuits learn to use temporal relationships to build predictive models of the world, a question whose answer remains as elusive as it is critical for our understanding of the brain.
One of the most important unanswered questions in neuroscience is how the brain learns to recognize, represent, and predict temporal relationships. This project will address this question by using the early visual system to determine how cortical circuits encode and predict spatiotemporal visual information. Understanding the mechanisms that support this ability will provide deeper insight into normal brain function and into the various neurological and psychiatric disorders that disrupt the ability to accurately process temporal information.