When people communicate with each other about spatially oriented tasks, they tend to use qualitative spatial references (such as "behind" in the description "Your eyeglasses are behind the lamp.") rather than precise quantitative terms. Although natural for people, such qualitative references are problematic for robots, which "think" in terms of mathematical expressions and numbers. Yet providing robots with the ability to understand and communicate using such spatial references has great potential for creating a more natural interface for robot users. It would allow users to interact with a robot much as they would with another human, a capability that is especially critical if robots are to provide assistance in unstructured environments occupied by people. This project will do the following: empirically capture and characterize the key components of spatial descriptions that indicate the location of a target object in a 3D immersive task embedded in an eldercare scenario; develop and refine algorithms that enable the robot to produce and comprehend descriptions containing these empirically determined key components within this scenario; and assess and validate the robot spatial language algorithms in virtual and physical environments. The project will train graduate students in an interdisciplinary setting encompassing psychology, computer science, and engineering, and will directly involve undergraduate students in the robotics work at the University of Missouri and in the human subject experimentation at the University of Notre Dame. This project will lead to a better understanding of how robots can and should be used for this class of assistive tasks in an eldercare scenario.

Project Report

The goal of this project was to investigate the use of natural spatial language for directing an assistive mobile robot situated in a home setting. The test case was a "find the object" task driven by spatial descriptions, e.g., a user telling the robot to "find the keys on the table next to the couch in the living room." We were particularly interested in whether older adults use spatial language differently than younger adults. We began with a series of human subject experiments on both younger and older adults, capturing their language when speaking to another person and when speaking to a robot.

The results showed that older adults used spatial language differently than younger adults: they used fewer words overall, fewer spatial units, fewer reference objects, and fewer modifiers, and they referred to room labels more often. Older adults also adapted their language when speaking to the robot rather than to another person, including adopting a different perspective. Both older and younger adults used furniture items as reference objects. These results have implications for the design and development of a human-robot interface for elderly users, both in the language interface itself and in the perception capabilities needed to carry out the task.

Guided by these findings, we developed furniture recognition capabilities so that the robot could recognize a table, a couch, a bed, and other items likely to serve as reference objects in a "find" operation. We tested speech recognition with a commercially available platform, again on both younger and older adults. The recognition rate was about 10% higher for younger adults than for older adults; among younger speakers, male voices were recognized at a higher rate than female voices, whereas among older speakers, male voices were recognized at a lower rate than female voices. We also developed a system that processes the spatial language and translates it into robot commands, and we tested the robot in both simulated and real environments. To assess robustness, we moved the furniture items somewhat from the positions used in the original descriptions; even then, the robot found the correct location with a success rate of 78%.
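The report does not detail how an utterance is decomposed before being translated into robot commands. As an illustration only, the minimal Python sketch below shows one plausible way a "find" utterance could be split into a target object and a chain of spatial units (a relation plus a reference object), mirroring the spatial-unit and reference-object measures discussed above. The relation list, class names, and parsing strategy are hypothetical, not the project's actual system.

```python
import re
from dataclasses import dataclass
from typing import List, Optional

# Spatial relations the toy parser recognizes; multi-word relations are
# listed before their single-word prefixes so they match first.
RELATIONS = ["next to", "in front of", "on top of", "behind", "on",
             "under", "near", "in"]

@dataclass
class SpatialUnit:
    relation: str   # e.g. "on", "next to"
    reference: str  # e.g. "table", "living room"

@dataclass
class FindCommand:
    target: str               # object the robot should locate
    units: List[SpatialUnit]  # chain of spatial constraints

def parse_find_command(utterance: str) -> Optional[FindCommand]:
    """Decompose an utterance such as 'find the keys on the table next
    to the couch in the living room' into a target and spatial units."""
    text = utterance.lower().strip().rstrip(".")
    m = re.match(r"(?:please\s+)?find\s+(?:the\s+)?(.*)", text)
    if m is None:
        return None
    rest = m.group(1)
    rel_pattern = "|".join(re.escape(r) for r in RELATIONS)
    # Splitting on a capturing group keeps the relations in the result,
    # so parts alternates: [target, rel1, ref1, rel2, ref2, ...]
    parts = re.split(rf"\b({rel_pattern})\b", rest)
    target = parts[0].strip()
    units = []
    for rel, ref in zip(parts[1::2], parts[2::2]):
        ref = re.sub(r"^\s*(the|a|an)\s+", "", ref.strip())
        units.append(SpatialUnit(relation=rel, reference=ref.strip()))
    return FindCommand(target=target, units=units)

if __name__ == "__main__":
    cmd = parse_find_command(
        "find the keys on the table next to the couch in the living room")
    print(cmd.target)    # keys
    for u in cmd.units:  # on/table, next to/couch, in/living room
        print(u.relation, "/", u.reference)
```

A real system would go one step further and ground the parsed reference objects against the robot's map and furniture recognition results; the sketch stops at the symbolic representation.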

Agency
National Science Foundation (NSF)
Institute
Division of Information and Intelligent Systems (IIS)
Type
Standard Grant (Standard)
Application #
1017097
Program Officer
Ephraim Glinert
Budget Start
2010-09-01
Budget End
2014-08-31
Fiscal Year
2010
Total Cost
$499,512
Name
University of Missouri-Columbia
City
Columbia
State
MO
Country
United States
Zip Code
65211