This research involves collaboration among investigators at three institutions. The PIs anticipate a future in which humans and intelligent robots will collaborate on shared tasks. To achieve this vision, a robot must have sufficiently rich knowledge of the task domain and that knowledge must be usable in ways that support effective communication between a human and the robot. Navigational space is one of the few task domains where the structure of the knowledge is sufficiently well understood for a physically-embodied robot agent to be a useful collaborator, meeting genuine human needs. In this project, the PIs will develop and evaluate an intelligent robot capable of being genuinely useful to a human, and capable of natural dialog with a human about their shared task.

The Hybrid Spatial Semantic Hierarchy (HSSH) is a human-inspired multi-ontology representation for knowledge of navigational space. The spatial representations in the HSSH provide for efficient incremental learning, graceful degradation under resource limitations, and natural interfaces for different kinds of human-robot interactions. Speech is a natural, though demanding, medium for natural language communication with a robot. To maintain real-time performance, natural language understanding must be organized to minimize backtracking from early conclusions in light of later information. This project will answer three scientific questions.

(1) Can the HSSH framework, extended with real-time computer vision, express the kinds of knowledge of natural human environments that are relevant to navigation tasks? (2) Can the HSSH representation support effective natural language communication in the spatial navigation domain? (3) Can we develop effective human-robot interaction that meets the needs of a person and improves the performance of the system?

To these ends, the PIs will perform this research with two different kinds of navigational robots, each learning from its travel experiences and building an increasingly sophisticated cognitive map: an intelligent robotic wheelchair that carries its human driver to desired destinations, and a telepresence robot that transmits its perceptions to a remote human driver as it navigates an environment, allowing the driver to achieve virtual presence and communicate with others remotely. To inform the design process, the PIs will conduct focus groups with potential users. They will also evaluate their implemented systems throughout the process, creating an iterative design-test cycle.

Broader Impacts: To be successful, an intelligent robot must not only perceive the world, represent what it learns, make useful inferences and plans, and act effectively; it must also communicate effectively with other agents, particularly with people. This confluence of grounded knowledge representation, situated natural language understanding, and human-robot interaction is intellectually fundamental, and is the focus of this research. Since the domain of spatial knowledge is foundational for virtually all aspects of human knowledge, project outcomes will have broad applicability. This work will create technologies for mobility assistance for people with disabilities in perception (blindness or low vision), cognition (developmental delay or dementia), or general frailty (old age). It will also support telepresence applications such as telecommuting, telemedicine, and search and rescue. The project includes outreach to K-12 and community college students, K-12 teachers, and the public in a number of venues.

Project Report

We anticipate a future in which humans and intelligent robots will collaborate on shared tasks. To achieve this vision, a robot must have sufficiently rich knowledge of the task domain, and that knowledge must be usable in ways that support effective communication between the human and the robot. Navigational space is one of the few task domains where the structure of the knowledge is sufficiently well understood for a physically-embodied robot agent to be a useful collaborator, meeting genuine human needs, as in the case of a robotic wheelchair. According to the 2011 census, the US Centers for Disease Control estimates that approximately 3.4 million Americans are living with a mobility disability, and roughly half of these require a wheelchair. Even for those who are able to self-propel their wheelchairs, a robotic wheelchair would provide a key benefit, particularly as they age. The goal of this project was to enable natural verbal human-robot interaction with a robotic wheelchair, allowing users to instruct the robot in natural language rather than through a joystick control interface. We developed several novel algorithms that support various aspects of natural language understanding necessary for such a robotic platform, from integrating visual information with natural language, to resolving references to locations unknown to the robot based on its map, to understanding requests hidden in indirect speech acts. We also developed a dataset from human subject experiments that can be used to further study the interaction between human gestures and natural language when humans give route instructions while sitting in the wheelchair. Both the algorithms and the empirical datasets are prerequisites for developing the next generation of intelligent autonomous wheelchairs that can be instructed verbally in natural ways.

Agency
National Science Foundation (NSF)
Institute
Division of Information and Intelligent Systems (IIS)
Type
Standard Grant (Standard)
Application #
1111323
Program Officer
Ephraim Glinert
Budget Start
2011-08-15
Budget End
2014-07-31
Fiscal Year
2011
Total Cost
$385,310
Name
Tufts University
City
Boston
State
MA
Country
United States
Zip Code
02111