This project aims to break the very high barrier to entry of mobile manipulation by integrating, in a single robot platform, methods drawn from all areas of AI, including machine learning, vision, navigation, manipulation, planning, reasoning, and speech/natural language processing (NLP). The project contemplates building a robot that can navigate home and office environments, pick up and interact with objects and tools, and intelligently converse with and help people in these environments. Over the long term, a single robot will perform such tasks as:
- Fetching a book or a person from an office, in response to a verbal request.
- Tidying up a space after a party, including picking up and throwing away trash, and placing dirty dishes and glasses in the dishwasher.
- Using the multiple tools necessary to assemble, say, a bookshelf.
- Showing guests around an active research lab (where things change daily), answering questions, and keeping track of an entire group.
A robot capable of these tasks would revolutionize home and office automation and have important applications ranging from elderly care to machine-shop assistance. To realize this vision, the PIs will carry out an integrated research program on learning, manipulation, perception, spoken dialog, and reasoning, all in the context of applying them to STAIR (Stanford AI Robot). The proposed computing platform will provide a unified testbed for developing machine learning, vision, navigation, manipulation, planning, reasoning, and NLP.
Broader Impact: The application itself (robots performing tasks for home and office) exhibits broad impact. Additionally, the STAIR project will be used to train graduate and undergraduate students. A course is being developed that engages students in team-based project work. Women and underrepresented students are being actively recruited.