While artificially intelligent agents have achieved expert-level performance on some specialized tasks, progress on designing agents that are broadly capable---able to reach adequate performance on a wide range of tasks---remains elusive. One major obstacle is that the sensors and actuators required by a general-purpose agent must be complex enough to support all the different tasks it may be asked to solve. The resulting complexity makes decision-making much harder and drastically hinders the effectiveness of such agents. By contrast, agents that do only one thing can be given much simpler inputs and outputs, carefully designed to be low-dimensional, highly informative, and task-relevant; such agents often perform well. This project posits that a key requirement for generally intelligent agents is the ability to autonomously formulate such representations for themselves---as abstractions over their complex sensor and actuator spaces---and plans to design new algorithms to do so. AI systems with this ability could be re-tasked to solve many different problems without modification, rather than requiring substantial (and often prohibitive) engineering effort for each new application.

This project aims to develop new algorithms that enable agents to learn compact, task-specific abstractions of new problems by combining and extending techniques for discovering high-level actions, discovering perceptual abstractions that support planning with those actions, and formally characterizing the complexity and value loss of using such abstractions. The project will: 1) design new algorithms for reward-driven (and therefore task-specific) perceptual- and action-abstraction discovery; 2) enable inter-task abstraction transfer (avoiding the need to re-learn abstractions from scratch for each new task) through new algorithms for learning generalized skills and constructing modular action-perception abstraction packages, together with new theory characterizing the value loss of using such generalized abstractions; and 3) create principled methods for incrementally constructing a library of modular action-perception abstractions and for adaptively recruiting existing action-perception abstractions to solve new tasks.
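To make the notion of a modular action-perception abstraction package concrete, the following is a minimal sketch in Python. All names here (Option, AbstractionPackage, AbstractionLibrary, phi) are hypothetical illustrations under assumed structure, not the project's actual implementation: a high-level action in the standard options framework (initiation set, policy, termination condition) is bundled with the perceptual abstraction phi that supports planning with it, and a library recruits the packages whose options can execute from the agent's current observation.

# Hypothetical sketch of modular action-perception abstraction packages
# and a library that recruits them for a new task. Names and structure
# are illustrative assumptions, not the project's implementation.

from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Option:
    """A high-level action in the options framework."""
    initiation: Callable[[np.ndarray], bool]    # states where the option may start
    policy: Callable[[np.ndarray], int]         # low-level action to take while running
    termination: Callable[[np.ndarray], float]  # probability of terminating in a state

@dataclass
class AbstractionPackage:
    """An option bundled with the perceptual abstraction that supports
    planning with it: phi maps raw observations to a compact abstract state."""
    option: Option
    phi: Callable[[np.ndarray], np.ndarray]

class AbstractionLibrary:
    """An incrementally built collection of packages; recruits those whose
    options are executable from the current raw observation."""
    def __init__(self) -> None:
        self.packages: List[AbstractionPackage] = []

    def add(self, package: AbstractionPackage) -> None:
        self.packages.append(package)

    def recruit(self, obs: np.ndarray) -> List[AbstractionPackage]:
        return [p for p in self.packages if p.option.initiation(obs)]

In this sketch, abstraction transfer corresponds to calling recruit on a new task's observations and planning over the abstract states produced by each recruited package's phi, rather than over the raw sensor space.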

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Budget Start: 2020-10-01
Budget End: 2024-09-30
Fiscal Year: 2019
Total Cost: $1,199,684
Organization: Brown University
City: Providence
State: RI
Country: United States
Zip Code: 02912