This is the first year of a three-year continuing award. This research investigates task-level planning for autonomous agents, such as mobile robots, that operate in an uncertain environment. Such robots typically have only approximate, inaccurate, or minimal models of the environment. For example, although the geometry of its environment is crucial to its performance, a mobile robot might have only a partial or local "map" of the world. The research investigates an approach in which the robot acquires the necessary information about the world by planning a series of experiments using its sensors and actuators, and by building data structures based on its observations during these experiments. A key feature of this approach is that the experiments the robot performs should be driven by the information demands of the task.

This research develops: (1) a theory of sensor interpretation and task-directed planning using perceptual equivalence classes, intended to be applicable in highly uncertain or unmodeled environments, such as those a mobile robot encounters; (2) algorithmic techniques for modeling geometric constraints on recognizability, and for building internal representations (such as maps) using these constraints; (3) explicit encoding of the information requirements of a task using a lattice (information hierarchy) of recognizable sets, which allows the robot to perform experiments to recognize a situation or a landmark; and (4) synthesis of robust mobot (mobile robot) programs using the geometric constraints, constructive recognizability experiments, and uncertainty models imposed by the task.

The research will also implement the theory and test it on mobile robots in the laboratory.
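The idea of perceptual equivalence classes in item (1), and the refinement ordering that underlies the lattice of recognizable sets in item (3), can be sketched roughly as follows. This is a minimal illustration under assumed details, not the award's actual formulation: the function names, the scalar-distance world model, and the quantizing sensor are all hypothetical.

```python
# Hypothetical sketch: two world states are perceptually equivalent when
# the robot's coarse sensor cannot tell them apart. Refining the sensor
# resolution refines the partition, giving a lattice ordered by refinement.

from collections import defaultdict

def sensor_reading(distance, resolution):
    """Quantize a true distance to the sensor's resolution; all states
    that map to the same reading are indistinguishable to the robot."""
    return round(distance / resolution)

def equivalence_classes(states, resolution):
    """Partition world states (here, scalar distances) into classes the
    sensor cannot distinguish at the given resolution."""
    classes = defaultdict(list)
    for s in states:
        classes[sensor_reading(s, resolution)].append(s)
    return dict(classes)

states = [0.1, 0.2, 0.7, 0.8, 1.3]
coarse = equivalence_classes(states, resolution=0.5)  # coarser partition
fine = equivalence_classes(states, resolution=0.1)    # refines the coarse one
```

In this toy model, the coarse sensor merges the states 0.1 and 0.2 into one class, while the finer sensor separates them; each class of the fine partition lies inside exactly one class of the coarse partition, which is the refinement ordering a lattice of recognizable sets would be built on.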