Robots have the potential to provide great societal benefits, especially when working with humans in tasks such as manufacturing, disaster relief, and elder care. However, robots are very difficult to program to perform new tasks: non-programmers can teach relatively stereotyped action sequences, while expert programmers can generate more elaborate action strategies only through long programming and debugging processes. Part of the difficulty stems from trying to teach the robot at the level of actions, since the actions needed to achieve a desired effect depend strongly on details of the environment. Instead, this project focuses on teaching the robot models of the environment. The robot can then use these models to plan its actions automatically, which leads to more adaptable behavior. Models are also easier to extend and re-use than action sequences, thereby reducing the burden of teaching subsequent tasks. The project involves a thorough integration of research and education: graduate and undergraduate students are involved in all aspects of the research, and the results will become part of an undergraduate subject on robot algorithms at MIT.
This project will develop techniques to teach a robot to perform long-horizon tasks in complex, uncertain domains, in a way that equips the robot with knowledge it can re-use and re-combine with previous knowledge to solve not just the task it was taught but a broad array of additional tasks. Furthermore, the robot will be aware of its own knowledge and lack of knowledge, and will be able to plan to take actions, including performing experiments and asking humans for further information, to improve its knowledge of how to behave in its environment. The project will develop a set of machine learning tools that allow humans to teach the robot the basic ideas of a new domain relatively quickly and straightforwardly, and that then enable the robot to continue improving its knowledge as it gains experience in the domain. The work builds on a new hierarchical framework for integrating robot motion planning, symbolic planning, purposive perception, and decision-theoretic reasoning. The framework, as it stands, supports planning and execution of pick-and-place tasks in complex domains that may require moving objects out of the way, using real, noisy robot perception and actuation. However, it requires a specification of the domain in which it is to operate; in the existing implementation, the domain description was written by hand, by experts, through a long period of trial and error. The concrete objective of the project is to develop methods that enable a robot to learn to perform high-level tasks in new domains by acquiring domain models from human-provided examples and advice. These methods will be evaluated in three domains using a Willow Garage PR2 mobile manipulation robot. The overriding objective is to develop methods that apply broadly and can be used to instruct robots to perform a wide variety of tasks.
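To make the idea of a hand-written domain model concrete, the sketch below shows a toy STRIPS-style specification together with a breadth-first planner that uses it. This is a minimal illustrative assumption, not the project's actual representation or framework: the operator names (Pick, Place), the predicates (on_table, holding, handempty), and the search strategy are all hypothetical, and the real system additionally handles motion planning, noisy perception, and decision-theoretic reasoning.

```python
# Illustrative only: a toy STRIPS-style domain model of the kind a human might
# hand-specify, plus a breadth-first planner that uses it. All names here
# (Pick, Place, on_table, holding, handempty) are hypothetical examples.
from collections import deque
from itertools import product

class Operator:
    def __init__(self, name, params, preconds, add, delete):
        self.name, self.params = name, params
        self.preconds, self.add, self.delete = preconds, add, delete

    def ground(self, objects):
        # Enumerate all bindings of the operator's parameters to objects.
        for binding in product(objects, repeat=len(self.params)):
            sub = dict(zip(self.params, binding))
            yield (self.name + str(binding),
                   frozenset(tuple(sub.get(t, t) for t in p) for p in self.preconds),
                   frozenset(tuple(sub.get(t, t) for t in p) for p in self.add),
                   frozenset(tuple(sub.get(t, t) for t in p) for p in self.delete))

# A hand-written model of a simple pick-and-place domain.
OPERATORS = [
    Operator("Pick",  ["?o"], [("on_table", "?o"), ("handempty",)],
             [("holding", "?o")], [("on_table", "?o"), ("handempty",)]),
    Operator("Place", ["?o"], [("holding", "?o")],
             [("on_table", "?o"), ("handempty",)], [("holding", "?o")]),
]

def plan(objects, init, goal):
    """Breadth-first search over states reachable with the grounded operators."""
    actions = [a for op in OPERATORS for a in op.ground(objects)]
    frontier, seen = deque([(frozenset(init), [])]), {frozenset(init)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for name, pre, add, delete in actions:
            if pre <= state:
                nxt = (state - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None

# Example: starting with a cup on the table, find a plan to be holding it.
print(plan(["cup"], {("on_table", "cup"), ("handempty",)}, {("holding", "cup")}))
```

The point of the sketch is only to show why such models are attractive to learn rather than hand-write: once the operators exist, the planner adapts to any initial state and goal in the domain, whereas a taught action sequence applies to just one situation.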