Neural and computational mechanisms of selective attention in experience-based decision making

To make correct decisions, we must learn from our past experiences. Learning has long been conceptualized as the formation of associations between stimuli and outcomes. But how should we define these "stimuli" in real-world decision-making environments that are complex and multidimensional? It might seem optimal to learn about all available stimulus features (height, color, shape, etc.). However, in natural environments only a few dimensions are relevant to the performance of any given task. Attending to and learning about only those dimensions that are relevant to the task at hand (and ignoring all others) improves performance, speeding learning and simplifying generalization to future stimuli that differ slightly. How do we know which dimensions are relevant to a given task, and should therefore be attended to and learned about? Considerable behavioral work in cognitive psychology has explored the dynamics of "attention learning" (how we learn what to attend to) within the context of categorization and concept formation. However, little is known about the neural basis of attention learning, or about how attention interacts with implicit trial-and-error reinforcement learning processes.

The goal of this project is to study the neural and computational substrates of attention learning in humans, and to understand how attention mechanisms interact with learning mechanisms in the brain. We propose to use a combination of computational modeling, behavioral experiments and functional neuroimaging in order to 1) determine the neural substrates of attention learning in the human brain, 2) directly track learning-driven changes in attention to different dimensions of a stimulus, and 3) establish individual differences in attention for learning separately from attention for decision.
The overarching neural hypothesis to be tested is two-fold: we hypothesize that neural mechanisms for reinforcement learning in the basal ganglia operate on an attentionally filtered representation of the environment that is conveyed to the striatum by fronto-parietal cortical afferents, and that this attentional filter is dynamically adjusted according to the outcomes of ongoing decisions. Throughout, we will not assume that attention learning consists of one unitary process, but rather investigate the possibility that individuals use different strategies to varying extents. In particular, building on our previous research and on findings in the categorization literature, we will focus on two computational strategies for attention learning, a serial hypothesis-testing strategy and a gradually focusing parallel attention strategy, that are differentially evident in different individuals. Our results will significantly advance basic scientific understanding of cognitive decision-making processes, elucidating the neural mechanisms underlying a critical component of decision making. From a practical perspective, understanding the computational and neural underpinnings of individual differences in attention learning may allow learning tasks to be tailored to different individuals. Moreover, the neural processes underlying attention learning are likely to be involved in clinical disorders such as schizophrenia, attention deficit disorder and drug abuse. In the long term, the proposed research may therefore impact the study and treatment of these disorders.
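The two candidate strategies can be made concrete with a small simulation. The sketch below is purely illustrative and is not the project's actual model: it implements one plausible version of the gradually focusing parallel attention strategy, in which feature values are learned by a delta rule while a softmax attention filter over stimulus dimensions both gates the value updates and is itself nudged toward dimensions whose features best discriminate reward. The toy task (three dimensions with three features each, only dimension 0 predictive of reward) and all parameter values are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task: 3 dimensions x 3 features; only dimension 0 predicts reward.
N_DIMS, N_FEATS = 3, 3

def simulate_trials(n=500):
    """Yield (stimulus, reward) pairs; reward depends only on dimension 0."""
    for _ in range(n):
        stim = rng.integers(0, N_FEATS, size=N_DIMS)  # one feature per dimension
        p = 0.8 if stim[0] == 0 else 0.2              # dimension 0 is relevant
        yield stim, float(rng.random() < p)

def feature_rl_with_attention(trials, alpha=0.3, eta=0.05, beta=3.0):
    """Illustrative parallel attention-learning strategy (an assumption,
    not the project's model): attention weights w gate learning of feature
    values V, and gradually focus on the dimension with the largest spread
    in learned values (i.e., whose features best discriminate reward)."""
    V = np.zeros((N_DIMS, N_FEATS))   # learned value of each feature
    phi = np.zeros(N_DIMS)            # attention "logits" per dimension
    for stim, reward in trials:
        w = np.exp(beta * phi)
        w /= w.sum()                                  # softmax attention filter
        v = float(sum(w[d] * V[d, stim[d]] for d in range(N_DIMS)))
        delta = reward - v                            # reward prediction error
        for d in range(N_DIMS):
            V[d, stim[d]] += alpha * w[d] * delta     # attention-gated learning
        phi += eta * (np.ptp(V, axis=1) - phi)        # drift toward value spread
    return V, w

V, w = feature_rl_with_attention(simulate_trials())
print(w)  # attention should end up concentrated on dimension 0
```

A serial hypothesis-testing strategy, by contrast, would place essentially all attention on a single candidate dimension at a time and switch to a new candidate when outcomes disconfirm it; the parallel strategy above instead starts with attention spread across dimensions and narrows it gradually as evidence accumulates.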
The proposed work will use an interdisciplinary combination of computational methods with neuroscientific and behavioral data to advance basic scientific knowledge about the interaction between learning and attention in everyday decision-making scenarios. From a broad perspective, our results will not only shed light on basic principles of decision making, but will also have implications for attention-related disorders such as schizophrenia and attention-deficit disorder, and for tailoring learning and decision-making tasks to specific individuals.
Niv, Yael; Langdon, Angela (2016) Reinforcement learning with Marr. Curr Opin Behav Sci 11:67-73
Arkadir, David; Radulescu, Angela; Raymond, Deborah et al. (2016) DYT1 dystonia increases risk taking in humans. Elife 5
Schuck, Nicolas W; Cai, Ming Bo; Wilson, Robert C et al. (2016) Human Orbitofrontal Cortex Represents a Cognitive Map of State Space. Neuron 91:1402-12
Takahashi, Yuji K; Langdon, Angela J; Niv, Yael et al. (2016) Temporal Specificity of Reward Prediction Errors Signaled by Putative Dopamine Neurons in Rat VTA Depends on Ventral Striatum. Neuron 91:182-93
Schuck, Nicolas W; Gaschler, Robert; Wenke, Dorit et al. (2015) Medial prefrontal cortex predicts internally driven strategy shifts. Neuron 86:331-40
Niv, Yael; Daniel, Reka; Geana, Andra et al. (2015) Reinforcement learning in multidimensional environments relies on attention mechanisms. J Neurosci 35:8145-57
Niv, Yael; Langdon, Angela; Radulescu, Angela (2015) A free-choice premium in the basal ganglia. Trends Cogn Sci 19:4-5
Wilson, Robert C; Niv, Yael (2015) Is Model Fitting Necessary for Model-Based fMRI? PLoS Comput Biol 11:e1004237
Soto, Fabian A; Gershman, Samuel J; Niv, Yael (2014) Explaining compound generalization in associative and causal learning through rational principles of dimensional generalization. Psychol Rev 121:526-58
Wilson, Robert C; Takahashi, Yuji K; Schoenbaum, Geoffrey et al. (2014) Orbitofrontal cortex as a cognitive map of task space. Neuron 81:267-79
Showing the most recent 10 out of 16 publications