This proposal addresses a central problem in the creation of autonomous intelligent agents: developing a theory of what constitutes reasonable conclusions and decisions drawn from the information supplied to an agent. Such conclusions have long been recognized to be not purely deductive: they are not necessarily logically entailed by the supplied information but are defeasible, in the sense that they may need to be withdrawn in the face of new information. Because the real-world environments in which autonomous intelligent agents are presumed to operate do not provide complete information, a theory of defeasible reasoning is necessary to support agent planning. Current theories of defeasible reasoning are not entirely adequate for building such agents, and this proposal addresses that inadequacy in two ways: (1) by constructing and implementing a theory of defeasible reasoning suitable for autonomous agents; and (2) by employing that system of reasoning as an inference engine to support decision-theoretic planning by agents. The project takes a hybrid approach that combines symbolic and probabilistic techniques, building on the PI's ongoing OSCAR project, which aims to develop autonomous cognitive agents in general. This project focuses on the particular problem of decision-theoretic planning, which is relevant to a wide range of contexts such as the exploration of space by autonomous Mars-style rovers.
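The defining property of defeasible reasoning named above, that a conclusion licensed by current information may be withdrawn when new information arrives, can be illustrated with a minimal sketch. This toy example (a classic default rule about birds flying) is purely illustrative and does not represent OSCAR's actual architecture; all names in it are hypothetical:

```python
def conclusions(facts):
    """Draw defeasible conclusions from a set of (predicate, individual) facts.

    Default rule: a bird flies. Defeater: learning the individual is a
    penguin blocks the default, so the conclusion is withdrawn.
    """
    birds = {x for (p, x) in facts if p == "bird"}
    penguins = {x for (p, x) in facts if p == "penguin"}
    # Conclusions are recomputed against the whole knowledge base, so they
    # are not monotonic: adding a fact can remove a conclusion.
    return {("flies", x) for x in birds - penguins}

kb = {("bird", "tweety")}
assert ("flies", "tweety") in conclusions(kb)      # defeasibly concluded

kb.add(("penguin", "tweety"))                      # new information: a defeater
assert ("flies", "tweety") not in conclusions(kb)  # conclusion withdrawn
```

This contrasts with purely deductive inference, where adding premises can never invalidate a previously derived conclusion.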

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 0412791
Program Officer: Douglas H. Fisher
Budget Start: 2004-06-01
Budget End: 2008-05-31
Fiscal Year: 2004
Total Cost: $404,977
Name: University of Arizona
City: Tucson
State: AZ
Country: United States
Zip Code: 85721