The goal of this project is to develop machine learning algorithms that enable automated decision making and control in applications where autonomous agents must interact with the real world. In particular, the project will examine two application areas: autonomous robots and educational agents that interact with human students to facilitate learning. The principal technical development will center on applying deep neural networks (deep learning) to efficiently learn predictive models of the world, such as the physical environment of a robot or the behavior of a human student using an interactive educational agent. Deep learning has enabled impressive advances in passive perception domains such as computer vision and speech recognition, but typically requires very large amounts of data to succeed. This is a major challenge in interactive settings, where a robot cannot spend weeks or months interacting with its environment just to learn a single behavior. To address this challenge, the project will investigate how predictive models can be transferred from prior tasks to a new task. The technologies developed in this project could enable substantially more sophisticated autonomous systems that adapt quickly to new situations through transfer. Economic impacts could include new consumer robotics products and improved education through intelligent automation.

Reinforcement learning holds the promise of automating complex decision making and control in the presence of uncertainty. For a wide range of real-world problems, from robotic control and autonomous vehicles to interactive educational tools, this would provide dramatic improvements in capability and reductions in engineering cost. However, applying reinforcement learning to complex, unstructured environments and real-world problems with raw inputs, such as images and sounds, remains tremendously difficult. Deep learning has shown a great deal of promise for tackling complex learning problems, especially ones that require parsing high-dimensional, raw sensory signals, but the most successful applications of deep learning use very large amounts of labeled data. This is at odds with the demands of reinforcement learning, where the goal is typically to learn an effective policy using a minimal amount of interaction. This project aims to address this challenge by developing algorithms for model-based deep reinforcement learning, in which a generalizable model is learned from past experience on related but distinct tasks and then transferred to a new task so that it can be learned very quickly, directly from raw sensory inputs.
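The transfer idea above can be illustrated with a toy sketch. This is not the project's actual method (which targets deep networks and raw sensory inputs); it is a deliberately simplified linear-dynamics example, with all function names and parameters chosen for illustration, showing why a dynamics model warm-started from a related source task can be adapted to a new target task from far fewer transitions than a model trained from scratch.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_s, dim_a = 4, 2  # state and action dimensions (illustrative)

def rollout(A, B, n_steps):
    """Collect transitions from noise-free linear dynamics s' = A s + B a."""
    s = rng.normal(size=dim_s)
    X, Y = [], []
    for _ in range(n_steps):
        a = rng.normal(size=dim_a)
        s_next = A @ s + B @ a
        X.append(np.concatenate([s, a]))  # model input: (state, action)
        Y.append(s_next)                  # model target: next state
        s = s_next
    return np.array(X), np.array(Y)

def fine_tune(X, Y, W_init, lr=0.05, steps=300):
    """A few gradient steps on mean squared prediction error, from W_init."""
    W = W_init.copy()
    for _ in range(steps):
        W -= lr * X.T @ (X @ W - Y) / len(X)
    return W

# Source task: plenty of experience, so fit the model by least squares.
A_src = 0.8 * np.eye(dim_s) + 0.05 * rng.normal(size=(dim_s, dim_s))
B = rng.normal(size=(dim_s, dim_a))
X_src, Y_src = rollout(A_src, B, 200)
W_src = np.linalg.lstsq(X_src, Y_src, rcond=None)[0]

# Target task: slightly perturbed dynamics, only a handful of transitions.
A_tgt = A_src + 0.05 * rng.normal(size=(dim_s, dim_s))
X_few, Y_few = rollout(A_tgt, B, 5)

W_transfer = fine_tune(X_few, Y_few, W_src)                # warm start
W_scratch = fine_tune(X_few, Y_few, np.zeros_like(W_src))  # cold start

# Evaluate one-step prediction error on fresh target-task transitions.
X_eval, Y_eval = rollout(A_tgt, B, 100)
err_transfer = np.mean((X_eval @ W_transfer - Y_eval) ** 2)
err_scratch = np.mean((X_eval @ W_scratch - Y_eval) ** 2)
```

With only five target-task transitions the cold-started model cannot identify the dynamics (the data leave most of the parameter space unconstrained), whereas the warm-started model only has to correct a small perturbation, so `err_transfer` comes out well below `err_scratch`. The project's contribution lies in making this kind of transfer work for deep predictive models over raw sensory inputs, where the models are far richer than this linear toy.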

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 1614653
Program Officer: Weng-keen Wong
Project Start:
Project End:
Budget Start: 2016-09-01
Budget End: 2016-11-30
Support Year:
Fiscal Year: 2016
Total Cost: $479,279
Indirect Cost:
Name: University of Washington
Department:
Type:
DUNS #:
City: Seattle
State: WA
Country: United States
Zip Code: 98195