Humans and animals learn to select actions effectively based on past experience. One particular form of reinforcement learning, in which learning is driven by errors in predicting rewards, has provided parsimonious explanations for a broad range of learning phenomena. Such models have also provided some insights into the biological machinery involved in this process. Dopamine neurons projecting to the striatum are thought to encode a reward prediction error that is used to train striatal neurons to reflect the value of a particular action in a particular state. While traditional reinforcement learning models are both simple and effective, they fail to capture at least one striking aspect of human learning behavior: people learn more from some errors than from others. In particular, people tend to be more influenced by errors salient enough to suggest a change in context, or by errors that occur during moments of uncertainty. This behavior is well described by abstract statistical models of optimal inference (sketched informally after the Aims below), but the mechanisms by which it could be implemented in the brain remain unknown. Here I examine a potential mechanism by which this rational adjustment of learning might be implemented in the brain: anterior cingulate cortex (ACC), a brain area important for behavioral updating, might represent the current context and relay this information to neurons in the striatum that encode action values. By representing a new context after a salient error, ACC could drive the activation of a new set of striatal neurons, thereby discarding the now-irrelevant information gleaned in the previous context and speeding learning. While such a system would allow for rational adjustments in learning, it would require finely tuned control over the maintenance and discarding of context representations in ACC. One potential mechanism for achieving this fine-tuning depends on tonic (persistent) dopamine levels in ACC: higher tonic dopamine levels are thought to improve network stability, which, in ACC, might produce stable context representations and a learning rate optimized for stable environments. The goal of this proposal is to provide me with training in computational modeling, human EEG measurement, and behavioral pharmacology. This training will allow me to test the hypothesis that dopaminergic neuromodulatory systems and networks in ACC serve complementary roles in adjusting the influence of outcomes on future actions, through two specific Aims.
The first Aim will examine whether feedback-locked EEG responses emanating from ACC reflect rational adjustments of learning, predict behavioral updating, and are consistent with changes to a context representation.
The second Aim will examine whether pharmacologically increasing cortical dopamine levels slows learning and attenuates feedback-locked EEG responses.
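
The adaptive-learning idea at the center of this proposal can be made concrete with a toy model. The sketch below is my own illustration rather than the proposal's actual model; the function name, thresholds, and learning rates are assumptions chosen for clarity. It shows standard delta-rule value updating in which outcomes that produce unusually large prediction errors, plausibly signaling a context change, are weighted more heavily than ordinary ones.

```python
# Minimal sketch (illustrative only): delta-rule value learning in which the
# learning rate is boosted after surprising outcomes, so that errors
# suggesting a context change carry more weight than routine errors.
# Parameter names and values (base_lr, boosted_lr, surprise_threshold,
# error_scale) are assumptions, not values from the proposal.

def update_value(value, reward, base_lr=0.1, boosted_lr=0.8,
                 surprise_threshold=2.0, error_scale=1.0):
    """Return an updated value estimate after observing `reward`."""
    prediction_error = reward - value
    # Surprise: error magnitude relative to the expected scale of errors.
    surprise = abs(prediction_error) / error_scale
    # Learn slowly from ordinary errors, quickly from salient ones.
    learning_rate = boosted_lr if surprise > surprise_threshold else base_lr
    return value + learning_rate * prediction_error


# Example: a stable stretch of rewards followed by an abrupt change.
value = 0.0
for reward in [1.0, 1.2, 0.9, 1.1, 5.0, 5.1, 4.9]:
    value = update_value(value, reward)
    print(f"reward={reward:.1f}  value={value:.2f}")
```

A fully normative treatment would instead track uncertainty and change-point probability explicitly; the fixed threshold here simply conveys the qualitative behavior of interest: slow updating in stable stretches, rapid updating after surprising outcomes.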

Public Health Relevance

This project will examine how dopamine levels in prefrontal regions of the brain affect the influence of unexpected events on neural circuits and behavior. The results of this project are likely to change the way we think about mental disorders such as ADHD and schizophrenia, which are thought to involve an imbalance in dopamine signaling, and might even provide insights into how various symptoms of these disorders could be treated.

Agency
National Institutes of Health (NIH)
Institute
National Institute of Mental Health (NIMH)
Type
Postdoctoral Individual National Research Service Award (F32)
Project #
5F32MH102009-03
Application #
9142356
Study Section
Special Emphasis Panel (ZRG1)
Program Officer
Desmond, Nancy L
Project Start
2014-09-01
Project End
2017-08-31
Budget Start
2016-09-01
Budget End
2017-08-31
Support Year
3
Fiscal Year
2016
Total Cost
Indirect Cost
Name
Brown University
Department
Social Sciences
Type
Schools of Arts and Sciences
DUNS #
001785542
City
Providence
State
RI
Country
United States
Zip Code
Nassar, Matthew R; Helmers, Julie C; Frank, Michael J (2018) Chunking as a rational strategy for lossy data compression in visual working memory. Psychol Rev 125:486-511
Krishnamurthy, Kamesh; Nassar, Matthew R; Sarode, Shilpa et al. (2017) Arousal-related adjustments of perceptual biases optimize perception in dynamic environments. Nat Hum Behav 1:
Nassar, Matthew R; Bruckner, Rasmus; Eppinger, Ben (2016) What do we GANE with age? Behav Brain Sci 39:e218
Nassar, Matthew R; Bruckner, Rasmus; Gold, Joshua I et al. (2016) Age differences in learning emerge from an insufficient representation of uncertainty in older adults. Nat Commun 7:11609
Nassar, Matthew R; Frank, Michael J (2016) Taming the beast: extracting generalizable knowledge from computational models of cognition. Curr Opin Behav Sci 11:49-54
Jepma, Marieke; Murphy, Peter R; Nassar, Matthew R et al. (2016) Catecholaminergic Regulation of Learning Rate in a Dynamic Environment. PLoS Comput Biol 12:e1005171