Humans learn how to make rewarding choices in multiple ways, often in parallel. Some learning mechanisms are fast and flexible but mentally effortful; others are slow and inflexible but effortless, learning the value of different choices in different situations. This project investigates the possibility that these mechanisms are not independent. Specifically, the investigators test the hypothesis that effortful processes contribute to identifying what the slow and effortless “reinforcement learning” processes learn the value of: which features of the situation are relevant, and which aspects of the choices matter. In that sense, the simpler reinforcement learning process, usually thought to be automated and instinctive, may be improved by the exertion of cognitive effort, such as attention and short-term memory, and impaired by a lack thereof. Better understanding the role of cognitive effort in these otherwise effortless reinforcement learning processes should strengthen our ability to identify sources of learning impairment and to optimize learning in the many everyday situations where we need to learn, whether mastering new software, parenting, or interacting with others in new environments. A better understanding of the mechanisms that support human learning will also provide inspiration for improved artificial intelligence algorithms.

Learning in humans is the result of a carefully orchestrated set of processes interacting in parallel. Some processes, like working memory, rely on executive function to store information in a flexible format that is effortful to maintain and use. Other processes, like reinforcement learning, store information in a less flexible but more robust and virtually effortless format, encoding the value of choices. In this project, the investigators study how executive functions may additionally support reinforcement learning processes. The project uses novel experimental protocols to examine how weakening executive functions affects learning, and applies novel computational models to disentangle the contributing learning processes. A central goal is to establish a computational architecture that explains how executive function supports reinforcement learning processes. This work will significantly advance our understanding of the computational mechanisms that underlie learning in humans. It will highlight the importance of considering how the different processes that contribute to learning interact, and the fact that even learning processes considered to be mostly automated depend on “intelligent” executive functions. The project has important broader implications for learning in everyday life as well as for the use of artificial intelligence in technological advances. Future findings could help design more effective pedagogical approaches and lead to more adaptive, individualized teaching, impacting many domains where learning is essential, including education, public health, and software design, with significant implications for individuals with learning impairments. Several young scientists will be trained during this project, in particular in highly sought-after computational modeling skills.
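
To make the kind of architecture described above concrete, the following is a minimal illustrative sketch of a dual-process learner: an incremental reinforcement learning module mixed, at the choice-policy level, with a capacity-limited, decaying working-memory module. This is not the investigators' actual model; the function names, the mixture rule, and all parameter values (alpha, beta, wm_capacity, wm_decay, wm_weight) are assumptions chosen for illustration.

```python
import numpy as np

def softmax(values, beta):
    """Convert action values to choice probabilities (inverse temperature beta)."""
    v = beta * (values - values.max())  # subtract max for numerical stability
    p = np.exp(v)
    return p / p.sum()

def simulate_rlwm(stimuli, reward_fn, n_actions=3, alpha=0.1, beta=8.0,
                  wm_capacity=3, wm_decay=0.1, wm_weight=0.8, seed=0):
    """Simulate stimulus-action learning with an illustrative RL + WM mixture.

    RL: incremental value learning, slow but robust.
    WM: one-shot storage of the last outcome per stimulus,
        capacity-limited and decaying toward uniform (effortful, flexible).
    The choice policy mixes the two modules' policies.
    """
    rng = np.random.default_rng(seed)
    n_stim = len(set(stimuli))
    q = np.full((n_stim, n_actions), 1.0 / n_actions)   # RL action values
    wm = np.full((n_stim, n_actions), 1.0 / n_actions)  # WM contents
    # Effective WM weight shrinks when the set size exceeds WM capacity.
    w = wm_weight * min(1.0, wm_capacity / n_stim)

    choices, outcomes = [], []
    for s in stimuli:
        wm += wm_decay * (1.0 / n_actions - wm)          # WM decays toward uniform
        p = w * softmax(wm[s], beta) + (1 - w) * softmax(q[s], beta)
        a = rng.choice(n_actions, p=p)
        r = reward_fn(s, a)
        q[s, a] += alpha * (r - q[s, a])                 # incremental RL update
        wm[s] = 1.0 / n_actions                          # WM overwrites in one shot
        wm[s, a] = float(r)
        choices.append(a)
        outcomes.append(r)
    return np.array(choices), np.array(outcomes)

# Hypothetical usage: three stimuli, each with one rewarded action.
stims = [0, 1, 2] * 20
choices, rewards = simulate_rlwm(stims, lambda s, a: 1.0 if a == (s + 1) % 3 else 0.0)
print("mean reward:", rewards.mean())
```

In a sketch like this, weakening executive function corresponds to lowering the working-memory weight or capacity, which shifts choices toward the slower reinforcement learning values; that is one way the interaction the project targets can be expressed computationally.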

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Behavioral and Cognitive Sciences (BCS)
Application #: 2020844
Program Officer: Soo-Siang Lim
Budget Start: 2020-09-01
Budget End: 2023-08-31
Fiscal Year: 2020
Total Cost: $248,738
Name: University of California Berkeley
City: Berkeley
State: CA
Country: United States
Zip Code: 94710