This research involves collaboration among investigators at three institutions. The PIs anticipate a future in which humans and intelligent robots will collaborate on shared tasks. To achieve this vision, a robot must have sufficiently rich knowledge of the task domain, and that knowledge must be usable in ways that support effective communication between the human and the robot. Navigational space is one of the few task domains where the structure of the knowledge is sufficiently well understood for a physically embodied robot agent to be a useful collaborator, meeting genuine human needs. In this project, the PIs will develop and evaluate an intelligent robot capable of being genuinely useful to a human, and capable of natural dialog with a human about their shared task.
The Hybrid Spatial Semantic Hierarchy (HSSH) is a human-inspired, multi-ontology representation for knowledge of navigational space. The spatial representations in the HSSH provide for efficient incremental learning, graceful degradation under resource limitations, and natural interfaces for different kinds of human-robot interaction. Speech is a natural, though demanding, way for a human to communicate with a robot in natural language. To maintain real-time performance, natural language understanding must be organized to minimize backtracking from early conclusions as later information arrives. This project will answer three scientific questions.
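To suggest what a multi-ontology map might look like in practice, the following minimal Python sketch pairs a local metrical grid with a global topological graph, in the spirit of the HSSH's published levels. All class and field names here are illustrative assumptions, not the project's actual code or API; the point is only that symbolic route planning over places can degrade gracefully when fine metrical detail is unavailable.

```python
from dataclasses import dataclass, field

@dataclass
class LocalMetricalMap:
    """Occupancy grid of the robot's immediate surroundings (illustrative)."""
    resolution_m: float = 0.05
    occupancy: dict = field(default_factory=dict)  # (x, y) cell -> P(occupied)

@dataclass
class Place:
    """A decision point (e.g., an intersection) abstracted from local maps."""
    place_id: int
    gateways: list = field(default_factory=list)   # symbolic exits from the place

@dataclass
class TopologicalMap:
    """Global symbolic graph of places and the paths connecting them."""
    places: dict = field(default_factory=dict)     # place_id -> Place
    paths: list = field(default_factory=list)      # (place_id, gateway, place_id)

@dataclass
class HybridMap:
    """Hypothetical hybrid map: each level supports different queries
    (and, in dialog, different ways of talking about space)."""
    local_metrical: LocalMetricalMap = field(default_factory=LocalMetricalMap)
    global_topological: TopologicalMap = field(default_factory=TopologicalMap)

    def route(self, start: int, goal: int) -> list:
        # Sketch only: plan symbolically over the place graph, then refine
        # each leg metrically. Even without metrical detail, a degraded
        # system could still plan and describe routes at the symbolic level.
        ...
```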
(1) Can the HSSH framework, extended with real-time computer vision, express the kinds of knowledge of natural human environments that are relevant to navigation tasks? (2) Can the HSSH representation support effective natural language communication in the spatial navigation domain? (3) Can we develop effective human-robot interaction that meets the needs of a person and improves the performance of the system?
To these ends, the PIs will perform this research with two different kinds of navigational robots, each learning from its travel experiences and building an increasingly sophisticated cognitive map: an intelligent robotic wheelchair that carries its human driver to desired destinations, and a telepresence robot that navigates an environment while transmitting its perceptions to a remote human driver, giving that driver virtual presence and the ability to communicate with people at the robot's location. To inform the design process, the PIs will conduct focus groups with potential users. They will also evaluate the implemented systems throughout the project, creating an iterative design-test cycle.
Broader Impacts: To be successful, an intelligent robot must not only perceive the world, represent what it learns, make useful inferences and plans, and act effectively; it must also communicate effectively with other agents, particularly people. This confluence of grounded knowledge representation, situated natural language understanding, and human-robot interaction is intellectually fundamental, and it is the focus of this research. Since spatial knowledge is foundational for virtually all aspects of human knowledge, the project outcomes will have broad applicability. This work will create technologies for mobility assistance for people with disabilities in perception (blindness or low vision) or cognition (developmental delay or dementia), or with general frailty (old age). It will also support telepresence applications such as telecommuting, telemedicine, and search and rescue. The project includes outreach to K-12 and community college students, K-12 teachers, and the public in a number of venues.