This project is an effort to create a unified framework for solving very large problems with uncertain states and actions, such as those faced by manipulator robots acting in real-world environments. The results may hold particular promise for assistive technologies, including autonomous robots that can help elderly and disabled people with their daily activities. The proposed integrated framework will represent, apply, and learn hierarchical domain knowledge, and will include the ability to transfer knowledge from simpler problems to more complex ones. The research will enable autonomous agents to develop a structured representation of complex domains based on experience. The agents will use the learned representations to interpret natural language commands for both low-level and high-level requests.
The technical focus is enabling tractable planning in large, uncertain domains by generating and leveraging probabilistic domain knowledge at multiple levels of abstraction. Agents will autonomously create layered representations in which the layers build on one another to produce complex behaviors. Agents will learn to perform useful behaviors, such as navigating using low-level sensor feedback or assembling complex objects like a bridge or a table. The key technical contributions will be methods for (1) planning in large state/action spaces using the abstract object-oriented Markov decision process (AMDP) model, a new formalism for representing probabilistic domain knowledge at multiple levels of abstraction; (2) learning hierarchical task knowledge in the form of AMDPs; and (3) interpreting natural language commands at multiple levels of abstraction by mapping them to the learned hierarchical structure. The formalism will be demonstrated and validated in several domains, including a simulated "cleanup" toy domain, challenging and complex video games, and a robot manipulation task.
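To make the layered-representation idea concrete, the following is a minimal illustrative sketch of how such a hierarchy might be organized. It is not the proposal's AMDP formalism itself: the `AMDPTask` class, the `execute` function, the `step` callback, and the two-level "cleanup" example are hypothetical names introduced here, and each level's planner is a stand-in for an actual MDP solver such as value iteration.

```python
# Minimal illustrative sketch of a layered AMDP-style hierarchy (hypothetical
# names; the per-level planner is a stand-in for an actual MDP solver).
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class AMDPTask:
    """One level of the hierarchy: an MDP over this level's abstract states
    and actions. An abstract action is either primitive (no child entry) or
    is refined by a child AMDPTask one level down."""
    name: str
    abstract: Callable[[dict], dict]    # project a raw state into this level's abstract state
    plan: Callable[[dict], List[str]]   # local planner for this level (e.g., value iteration)
    children: Dict[str, "AMDPTask"] = field(default_factory=dict)


def execute(task: AMDPTask, state: dict,
            step: Callable[[dict, str], dict]) -> dict:
    """Plan at this level, then ground each abstract action: recurse into its
    subtask if one exists, otherwise apply it as a primitive via `step`."""
    for action in task.plan(task.abstract(state)):
        child = task.children.get(action)
        state = execute(child, state, step) if child else step(state, action)
    return state


# Hypothetical two-level hierarchy loosely modeled on the "cleanup" toy domain.
navigate = AMDPTask(
    name="navigate_to_bin",
    abstract=lambda s: {"pos": s["pos"]},
    plan=lambda a: ["north", "north", "east"],   # placeholder for a learned low-level policy
)
cleanup = AMDPTask(
    name="cleanup",
    abstract=lambda s: {"holding": s["holding"]},
    plan=lambda a: ["navigate_to_bin", "drop"],
    children={"navigate_to_bin": navigate},
)

final_state = execute(
    cleanup,
    {"pos": (0, 0), "holding": "block"},
    step=lambda s, a: {**s, "last_action": a},   # toy environment: just record the action
)
```

The point of the sketch is the control flow: each level plans over its own abstract states and actions, and each abstract action is either executed directly as a primitive or refined by planning recursively in the subtask one level down.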