This project explores the behavior of value-based learning methods in multi-agent environments. Value-based methods make decisions by using experience to estimate the utility of alternatives and then choosing the alternatives with the highest predicted value. Because they evaluate components of behavior rather than treating behaviors as atomic units, they are computationally and statistically efficient. While these methods have been used in computational experiments for many years, only recently have researchers begun to formally characterize their behavior. Our preliminary work suggests that some value-based methods exhibit super-Nash behavior, making them particularly worthy of study.
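
To make the core idea concrete, the sketch below shows two simple value-based learners playing a repeated two-player matrix game: each agent keeps a running estimate of the value of each action and mostly chooses the action with the highest estimate. This is an illustrative example only; the Prisoner's Dilemma payoffs, the learning rate, and the exploration rate are our own assumptions and do not describe the project's specific algorithms.

# Minimal value-based (tabular Q-style) learners in a repeated matrix game.
# Payoffs, alpha, epsilon, and round count are illustrative assumptions.
import random

ACTIONS = ["cooperate", "defect"]
# Row player's payoff for (my_action, opponent_action): a standard Prisoner's Dilemma.
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def make_learner(alpha=0.1, epsilon=0.1):
    """Return (choose, update, q): a value table plus closures that use it."""
    q = {a: 0.0 for a in ACTIONS}

    def choose():
        # Epsilon-greedy: usually pick the action with the highest estimated value.
        if random.random() < epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q[a])

    def update(action, reward):
        # Nudge the value estimate toward the payoff just observed.
        q[action] += alpha * (reward - q[action])

    return choose, update, q

choose_a, update_a, q_a = make_learner()
choose_b, update_b, q_b = make_learner()

for _ in range(5000):
    a, b = choose_a(), choose_b()
    update_a(a, PAYOFF[(a, b)])
    update_b(b, PAYOFF[(b, a)])

print("Player A values:", q_a)
print("Player B values:", q_b)

Running this sketch shows each learner's value estimates converging toward the payoffs its own experience supports; studying when such learners settle on mutual defection versus more cooperative, super-Nash outcomes is the kind of question the project examines formally.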

More specifically, we are analyzing, mathematically and experimentally, how value-based algorithms perform in several classes of simulated games of varying complexity drawn from the artificial intelligence community, in multi-agent engineering applications from wireless networking, and, in collaboration with cognitive neuroscientists, as models of human and animal decision making. Where possible, we are refining existing value-based algorithms to work more efficiently, robustly, and generally. We are also designing educational outreach activities, including entertaining instructional videos on how to promote cooperative behavior in real-life social dilemmas.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1414935
Program Officer: Hector Munoz-Avila
Project Start:
Project End:
Budget Start: 2013-07-01
Budget End: 2016-01-31
Support Year:
Fiscal Year: 2014
Total Cost: $157,019
Indirect Cost:
Name: Brown University
Department:
Type:
DUNS #:
City: Providence
State: RI
Country: United States
Zip Code: 02912