The ability to learn and think about complex situations is central to a range of human cognitive functions, including navigation, reasoning, and decision making. Numerous theories across these domains rely on representations of the states of external and internal environments, but how such representations are acquired remains unknown. My overall goal is to understand how animals, including humans, reason and learn in complex environments. In this project, we propose to investigate how these state representations are learned, using a complex sequential decision-making task in monkeys. In a novel behavioral task inspired by the board game Battleship, monkeys search for hidden shapes on a screen. There are millions of possible shapes, and yet monkeys are capable learners, vastly outperforming classic reinforcement learning algorithms. How monkeys learn the shapes so quickly remains mysterious. Beyond these unknown computational foundations of learning, the neural mechanisms that support this behavior are also unexplored. Recent electrophysiological and lesion studies have found signatures of state representations in the amygdala (AMYG) and the orbitofrontal cortex (OFC). However, these studies have used tasks with only a few states, which can be learned through simple associations, and the interactions and computational roles of these regions have not been characterized.

In light of these gaps in our understanding of learning in complex tasks, we will use the Battleship task to elucidate 1) the aspects of the environment that drive learning of complex state representations, 2) the computational foundations of this learning, using behavioral model fitting and deep neural networks, and 3) the neural mechanisms that underwrite this capacity in the AMYG-OFC circuit. We hypothesize that OFC represents hidden task states: states that cannot be fully defined in terms of perceptible stimuli and outcomes. We further hypothesize that AMYG plays a central role in learning and updating these representations by constructing an online representation of the current environment, drawing on input from OFC as well as from sensory processing and memory regions that represent current stimuli, outcomes, and associations. We posit that an observer-critic architecture underlies learning of complex task representations, in which AMYG computes and sends a teaching signal to OFC, which in turn learns and updates task state representations.

As part of this planned research, I will be trained in advanced modeling and neural analysis techniques and will complete a course of study on the use of deep neural networks. This training will take place under the guidance of Dr. Stefano Fusi and Dr. C. Daniel Salzman in the Zuckerman Mind Brain Behavior Institute at Columbia University.
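To make the hypothesized division of labor concrete, the sketch below is a minimal, hypothetical illustration in Python, not the proposed model or the actual task: a "critic" module computes a reward-prediction-error teaching signal that gates updates to an "observer" module's belief about which locations belong to a hidden shape on a small grid. All names, grid sizes, and learning rates (e.g., n_cells, alpha_obs) are illustrative assumptions rather than elements of the planned experiments.

import numpy as np

# Conceptual sketch (assumptions, not the proposed model): an "observer" learns a
# latent task-state representation, while a "critic" computes a reward-prediction-
# error teaching signal that gates the observer's updates. The hidden "shape" is a
# set of rewarded cells on a small search grid.

rng = np.random.default_rng(0)
n_cells = 25                                              # 5 x 5 search grid
true_shape = rng.choice(n_cells, size=4, replace=False)   # hidden target cells

belief = np.full(n_cells, 0.5)   # observer (OFC-like): belief that each cell is in the shape
value = 0.0                      # critic (AMYG-like): running value estimate
alpha_obs, alpha_crit = 0.3, 0.1 # illustrative learning rates

for trial in range(200):
    # Probe a cell, favoring cells currently believed to belong to the shape.
    probs = belief / belief.sum()
    choice = rng.choice(n_cells, p=probs)
    reward = 1.0 if choice in true_shape else 0.0

    # Critic computes the teaching signal: a reward prediction error.
    delta = reward - value
    value += alpha_crit * delta

    # Observer updates its state representation, gated by the teaching signal.
    belief[choice] += alpha_obs * abs(delta) * (reward - belief[choice])
    belief = np.clip(belief, 1e-3, 1.0)

print("Cells most strongly believed to be in the shape:", np.argsort(belief)[-4:])

In this toy arrangement the critic never represents the shape itself; it only scales how strongly each outcome revises the observer's state estimate, which is the general sense in which the proposal distinguishes the two modules.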
Despite its broad relevance to understanding the mind, how people learn and adapt to changes in their environments remains unknown. Deficits in the ability to learn and update representations of complex, real-world situations arise in numerous psychiatric disorders and also emerge from processes present in healthy, neurotypical individuals. We will use electrophysiological techniques in nonhuman primates and advanced computational modeling to begin to investigate how people can learn and adapt so quickly to a complex world.