AI planning algorithms have typically ignored what the agent should do when it has incomplete or incorrect information about the world. Probabilistic extensions to classical planners can quantify the agent's state of information, but they have not addressed when the agent should act to improve that state of information, how it should do so, or what it should do upon receiving new information. This project further extends planning algorithms to allow the construction of plans with information-gathering actions, contingent execution, and iteration. The approach is a synthesis of traditional symbolic-planning representations and algorithms with a Bayesian decision-theoretic model of uncertainty and information gathering. The project also explores the connection between this model and related work in the decision sciences and stochastic optimization, particularly existing work on partially observable Markov decision processes (POMDPs). The project's empirical component develops control strategies that allow probabilistic plans with feedback to be constructed efficiently. The model is applied to problems in automated manufacturing and medical decision making. This research will develop effective algorithms for planning under uncertainty that can sense and act on new information about the world, thus addressing a fundamental limitation of classical planning algorithms and allowing existing planning technology to be applied to a significantly wider range of application domains.
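The decision-theoretic core of this kind of planning can be illustrated with a toy value-of-information calculation: a belief state is updated by Bayes' rule after a sensing action, and the planner compares the expected utility of acting now against sensing first and then acting on the posterior. The sketch below is illustrative only; the two-state inspection scenario, the utilities, and the observation model are hypothetical assumptions, not the project's algorithm or domain model.

```python
# Toy sketch: Bayesian belief update plus expected value of information (VOI)
# for deciding whether an information-gathering (sensing) action is worthwhile.
# All states, utilities, and probabilities below are made-up for illustration.

def bayes_update(belief, likelihoods):
    """Posterior P(state | obs) from prior P(state) and P(obs | state)."""
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Prior belief over two hypothetical world states: part is good vs. flawed.
belief = [0.6, 0.4]

# Utility of each terminal action in each state (hypothetical numbers).
utility = {"ship": [10.0, -20.0], "rework": [2.0, 2.0]}

def expected_utility(action, b):
    return sum(p * u for p, u in zip(b, utility[action]))

def best_action_value(b):
    return max(expected_utility(a, b) for a in utility)

# Noisy inspection: P(obs | state) for each possible observation.
obs_model = {"pass": [0.9, 0.2], "fail": [0.1, 0.8]}

# Expected value of the contingent plan: sense, then act on the posterior.
value_with_sensing = 0.0
for obs, lik in obs_model.items():
    p_obs = sum(p * l for p, l in zip(belief, lik))
    value_with_sensing += p_obs * best_action_value(bayes_update(belief, lik))

value_without_sensing = best_action_value(belief)
voi = value_with_sensing - value_without_sensing

print(f"act now: {value_without_sensing:.2f}")
print(f"sense, then act: {value_with_sensing:.2f}")
print(f"expected value of information: {voi:.2f}")
```

With these numbers the expected value of information is positive, so the contingent plan that inspects first dominates acting immediately whenever the cost of the sensing action is below that margin.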

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 9523649
Program Officer: Ephraim P. Glinert
Project Start:
Project End:
Budget Start: 1996-04-15
Budget End: 2001-06-30
Support Year:
Fiscal Year: 1995
Total Cost: $265,289
Indirect Cost:
Name: University of Washington
Department:
Type:
DUNS #:
City: Seattle
State: WA
Country: United States
Zip Code: 98195