Overview: Among the most celebrated success stories in computational neuroscience is the discovery that many aspects of decision-making can be understood in terms of the formal framework of reinforcement learning (RL). Ideas drawn from RL have shed light on many behavioral phenomena in learning and action selection, on the functional anatomy and neural processes underlying reward-driven behavior, and on fundamental aspects of neuromodulatory function. However, for all these successes, RL-based work is haunted by an inconvenient truth: Standard RL algorithms scale poorly to large, complex problems. If human learning and decision-making are driven by RL-like mechanisms, how is it that we cope with the kinds of rich, large-scale tasks that are typical of everyday life? Existing research in both psychology and neuroscience hints at one answer to this question: Complex problems can be conquered if the decision-maker is equipped with compact, intelligently formatted representations of the task. This principle is seen in studies of expert play in chess, which show that chess masters leverage highly integrative internal representations of board configurations; in studies of frontal and parietal lobe function, which have revealed receptive fields strongly shaped by task contingencies; and in studies of the hippocampus, which point to the role of this structure in supporting a hierarchically organized 'cognitive map' of task space. Not coincidentally, the critical role of representation has come increasingly to the fore in RL-based research in machine learning and robotics, with growing interest in techniques for dimensionality reduction, hierarchy and deep learning. The present project aims toward a systematic, empirically validated account of the role of representation in supporting RL and goal-directed behavior at large.
The project brings together three investigators with complementary expertise in cognitive and computational neuroscience (Botvinick, Gershman) and machine learning and robotics (Konidaris). Together, we propose an integrative, interdisciplinary program of research, combining behavioral and neuroimaging work with human subjects, computational modeling of neurophysiological and behavioral data, and formal mathematical work and simulations with artificial agents. The proposed studies are diverse in theme and method, but work together toward a theory that is both formally grounded and empirically constrained. At a more concrete level, our research focuses on four specific classes of representation, considering the computational impact of each for RL, as well as the relevance of each to neuroscience and human behavior. As detailed in our Project Description, these include (1) metric embedding, (2) spectral decomposition, (3) hierarchical representation and (4) symbolic representation. In addition to investigating the implications of each of these four forms of representation individually, we hypothesize that they fit together into a tiered system, which works as a whole to support the sometimes competing demands of learning and action control.

Intellectual Merit (provided by applicant): Understanding how representational structure impacts learning and decision making is a core challenge in cognitive science, behavioral neuroscience and artificial intelligence. Success in establishing a computationally explicit, empirically validated theory in this area, with a specific focus on the role of representation in RL, would represent an important achievement with wide repercussions. The strategy of leveraging conceptual tools from machine learning to investigate human behavior and brain function can offer considerable scientific leverage, as our own previous research illustrates.
The proposed work is motivated by and builds upon established lines of research, bringing these together in order to capitalize on opportunities for synergy. In addition to answering specific empirical and computational questions, the proposed work aims to open up new avenues for future research in an important area of inquiry.

Broader Impact (provided by applicant): The proposed work lies at the crossroads of neuroscience, psychology, artificial intelligence and machine learning, and promises to advance the growing exchange among these fields. The project brings together investigators with contrasting disciplinary affiliations, with the explicit goal of bridging between intellectual cultures. The proposed work is likely to find a wide scientific audience, given its relevance to cognitive and developmental psychology, behavioral, cognitive and systems neuroscience, and behavioral economics. However, the work is likely to be of equal interest within artificial intelligence, machine learning, and robotics, where a current challenge is precisely to understand how representation learning can allow RL to scale up to large problems. Representational approaches to RL are already of intense interest within industry, where the present investigators have a record of active engagement. The topic of the proposed work has applicability in other areas as well, including education and training, and military and medical decision support. The plan for the project has a robust training component at both graduate and postdoctoral levels, with a commitment to fostering involvement of underrepresented minorities, as well as international engagement.
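The abstract's central claim, that a compact state representation can preserve good behavior while shrinking what must be learned, can be illustrated with a minimal sketch. The corridor task, the `min(s, 2)` state-aggregation function, and all parameter values below are illustrative assumptions, not taken from the proposal:

```python
import random

N = 12              # corridor states 0..N-1; reward on reaching state N-1
ACTIONS = (-1, +1)  # step left / step right
GAMMA, ALPHA, EPS = 0.95, 0.5, 0.1

def step(s, a):
    """Deterministic corridor dynamics: bump into walls, reward at the right end."""
    s2 = max(0, min(N - 1, s + a))
    return s2, (1.0 if s2 == N - 1 else 0.0), s2 == N - 1

def q_learning(phi, n_abstract, episodes=2000, seed=0):
    """Tabular Q-learning over the abstract state space induced by phi."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_abstract)]
    for _ in range(episodes):
        s = rng.randrange(N - 1)            # random non-terminal start state
        for _ in range(4 * N):
            z = phi(s)
            if rng.random() < EPS:          # epsilon-greedy exploration
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: Q[z][i])
            s2, r, done = step(s, ACTIONS[a])
            target = r if done else r + GAMMA * max(Q[phi(s2)])
            Q[z][a] += ALPHA * (target - Q[z][a])
            s = s2
            if done:
                break
    return Q

# Raw representation: one table row per underlying state (12 rows).
Q_raw = q_learning(phi=lambda s: s, n_abstract=N)
# Compact representation: collapse all states right of 1 into one bucket (3 rows).
Q_cmp = q_learning(phi=lambda s: min(s, 2), n_abstract=3)

# Both tables support the same greedy policy ("always go right"),
# but the compact table is a quarter the size.
policy_raw = [max((0, 1), key=lambda i: Q_raw[z][i]) for z in range(N - 1)]
policy_cmp = [max((0, 1), key=lambda i: Q_cmp[z][i]) for z in range(3)]
print(policy_raw)   # expect all 1s (go right)
print(policy_cmp)
```

In this toy the aggregation happens to preserve the optimal policy exactly; the four representation classes named above (metric embedding, spectral decomposition, hierarchical and symbolic representation) can be read as principled ways of constructing such compressions when a good one is not handed to the learner.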

Agency: National Institutes of Health (NIH)
Institute: National Institute of Mental Health (NIMH)
Type: Research Project (R01)
Project #: 5R01MH109177-02
Application #: 9126614
Study Section: Special Emphasis Panel (ZRG1)
Program Officer: Ferrante, Michele
Project Start: 2015-09-01
Project End: 2018-05-31
Budget Start: 2016-06-01
Budget End: 2017-05-31
Support Year: 2
Fiscal Year: 2016
Total Cost:
Indirect Cost:
Name: Princeton University
Department:
Type: Organized Research Units
DUNS #: 002484665
City: Princeton
State: NJ
Country: United States
Zip Code:
Tomov, Momchil S; Dorfman, Hayley M; Gershman, Samuel J (2018) Neural Computations Underlying Causal Structure Learning. J Neurosci 38:7143-7157
Momennejad, Ida; Otto, A Ross; Daw, Nathaniel D et al. (2018) Offline replay supports planning in human reinforcement learning. Elife 7:
Starkweather, Clara Kwon; Gershman, Samuel J; Uchida, Naoshige (2018) The Medial Prefrontal Cortex Shapes Dopamine Reward Prediction Errors under State Uncertainty. Neuron 98:616-629.e6
Gershman, Samuel J (2018) Deconstructing the human algorithms for exploration. Cognition 173:34-42
Gershman, Samuel J (2017) Predicting the Past, Remembering the Future. Curr Opin Behav Sci 17:7-13
Linderman, Scott W; Gershman, Samuel J (2017) Using computational theory to constrain statistical models of neural data. Curr Opin Neurobiol 46:14-24
Starkweather, Clara Kwon; Babayan, Benedicte M; Uchida, Naoshige et al. (2017) Dopamine reward prediction errors reflect hidden-state inference across time. Nat Neurosci 20:581-589
Konidaris, George (2016) Constructing Abstraction Hierarchies Using a Skill-Symbol Loop. IJCAI (U S) 2016:1648-1654
Doshi-Velez, Finale; Konidaris, George (2016) Hidden Parameter Markov Decision Processes: A Semiparametric Regression Approach for Discovering Latent Task Parametrizations. IJCAI (U S) 2016:1432-1440
Zhou, Yilun; Konidaris, George (2016) Representing and Learning Complex Object Interactions. Robot Sci Syst 2016: