Behaviors such as action selection and action sequencing require the shaping of dynamical neural activity patterns through learning. Understanding how such learning occurs is challenging because multiple brain areas are involved and because such behaviors span multiple timescales, from the granular level of moment-to-moment limb control to the cognitive level of goal-driven planning. Modern experiments can record from large numbers of neurons in behaving animals, in some cases simultaneously in multiple brain areas and throughout the learning of a task, and they provide a path forward for addressing these challenges. The overall goal of my research is to facilitate the synthesis and understanding of data from such experiments by constructing models of the brain circuits relevant for a given behavior, addressing how the neural activity in these circuits relates to behavior and how it is shaped over time through learning.

In recent work, I have developed expertise in learned dynamics in neural circuits through three related lines of research. First, I have modeled the neural computations underlying timing-related behavior and their implementation in the basal ganglia. Second, I have mathematically derived biologically plausible learning rules for supervised learning of time-dependent tasks in recurrent neural networks. Finally, I have worked on a theory-experiment collaboration in which recurrent neural network modeling was used in tandem with brain-machine interface experiments in monkeys to address the structure of neural representations within primary motor cortex.

In future work, I will build on this experience to address how dynamical neural activity patterns are learned in order to produce complex behaviors over both short and long timescales. One way to begin addressing this question is with the theory of reinforcement learning, which provides a rich and powerful framework for addressing how actions should be performed in order to maximize future rewards. Given their established role in implementing reinforcement learning, the basal ganglia form the starting point for my proposed research program.
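To make the reinforcement-learning framework concrete, the following minimal sketch (in Python, with an illustrative discrete state space, learning rate, and discount factor that are not part of the proposed models) implements the standard temporal-difference value update, whose prediction-error term is commonly compared with dopamine signals in the basal ganglia:

import numpy as np

# Tabular temporal-difference (TD) learning: state values are updated from
# reward prediction errors (RPEs), the quantity most often related to
# dopaminergic activity. All parameters here are illustrative.
n_states = 10            # simple chain of discrete states
alpha = 0.1              # learning rate
gamma = 0.9              # temporal discount factor
V = np.zeros(n_states)   # estimated value of each state

def td_update(state, reward, next_state):
    """Apply one TD(0) update and return the reward prediction error."""
    rpe = reward + gamma * V[next_state] - V[state]
    V[state] += alpha * rpe
    return rpe

# Traverse the chain repeatedly, with reward delivered only at the final state.
for episode in range(200):
    for s in range(n_states - 1):
        r = 1.0 if s == n_states - 2 else 0.0
        td_update(s, r, s + 1)

Over repeated episodes the value estimates propagate backward from the rewarded state, so the prediction error shifts from the time of reward to earlier, reward-predicting states; this signature is what links the formal framework to basal ganglia physiology.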
I first aim to revise the classical model of basal ganglia function by constructing and mathematically analyzing models that solve computationally challenging tasks and by comparing the results with new data from my experimental collaborators (Aim 1). Building on this work, and making use of my prior experience training recurrent neural networks to model motor tasks, I will next consider learning in motor cortex and how it complements learning in the basal ganglia (Aim 2), again comparing models with new experimental data. Finally, I will construct models of the thalamo-cortico-basal ganglia circuit by incorporating knowledge about the neural representations throughout this circuit and by leveraging recent advances in machine learning. In this way, I will address how the mammalian brain implements hierarchical reinforcement learning to integrate behaviors over short and long timescales (Aim 3). Taken together, this research will advance understanding of how neural activity supports action selection and sequencing in complex behaviors.
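As a point of reference for the recurrent-network modeling in Aims 1 and 2, the sketch below trains a recurrent network with standard gradient descent to produce a delayed output bump in response to a brief input pulse, a generic stand-in for a timed motor output. The architecture, task, and hyperparameters are illustrative assumptions, distinct from the biologically plausible learning rules and circuit models proposed above.

import torch

# Generic recurrent-network training sketch (PyTorch). A vanilla RNN receives
# a brief input pulse and is trained by backpropagation through time to emit
# a delayed Gaussian bump, a simple time-dependent target.
torch.manual_seed(0)
T, batch, n_in, n_hidden, n_out = 100, 32, 1, 64, 1

rnn = torch.nn.RNN(n_in, n_hidden, nonlinearity="tanh")
readout = torch.nn.Linear(n_hidden, n_out)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

x = torch.zeros(T, batch, n_in)   # input: pulse during the first 5 time steps
x[:5] = 1.0

t = torch.linspace(0.0, 1.0, T).view(T, 1, 1)
target = torch.exp(-((t - 0.7) ** 2) / 0.01).expand(T, batch, n_out)

for step in range(500):
    h, _ = rnn(x)                 # hidden-state trajectory, shape (T, batch, n_hidden)
    y = readout(h)                # network output at each time step
    loss = torch.mean((y - target) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()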

Public Health Relevance

Behaviors involving action selection and action sequencing require the shaping of dynamical neural activity patterns through learning. Understanding how such learning occurs is challenging because multiple brain areas are involved and because such behaviors may span multiple timescales, from low-level limb control to high-level, goal-driven planning. By developing circuit models of the basal ganglia and their cortical inputs, using new data from my experimental collaborators to inform and test these models, and drawing on recent advances in reinforcement learning from the field of machine learning, I will advance the understanding of how learned motor behaviors are implemented in the brain.

Agency: National Institutes of Health (NIH)
Institute: National Institute of Neurological Disorders and Stroke (NINDS)
Type: Career Transition Award (K99)
Project #: 1K99NS114194-01
Application #: 9871193
Study Section: Neurological Sciences Training Initial Review Group (NST)
Program Officer: Chen, Daofen
Project Start: 2019-12-01
Project End: 2021-11-30
Budget Start: 2019-12-01
Budget End: 2020-11-30
Support Year: 1
Fiscal Year: 2020
Total Cost:
Indirect Cost:
Name: Columbia University (N.Y.)
Department: Neurosciences
Type: Schools of Medicine
DUNS #: 621889815
City: New York
State: NY
Country: United States
Zip Code: 10032