We are the victims of our own success. We can now deploy mobile robots in real-world environments and have them operate fully autonomously for extended periods. We no longer need to surround our robots with graduate-student wranglers who keep them functional and keep the general public at a safe distance. These technical successes mean that members of the public must now interact directly with robots, without the aid of an interpreter. But the public is poorly equipped for such interactions, being unfamiliar with real robots and how they work. As a result, the interactions often go poorly: the robot is hindered in performing its task, and the human is left unhappy. For people to be comfortable interacting with a robot, they must feel that they understand what it is thinking, what it is trying to do, and what actions it will take. Moreover, people must be able to deduce this information after observing the robot only briefly, just as we do with other humans we encounter.

The fundamental problem is that humans communicate a wealth of information through a non-verbal "vocabulary" in which body language (how we stand, how we hold our arms, and so on), eye contact, nods, and other subtle cues ostensibly inessential to the task at hand play significant roles. We do this naturally and without conscious effort. Taken in context, this information allows us to infer another person's state of mind, goals, and intentions with surprising accuracy; this, in turn, allows us to predict how a given interaction will unfold and gives us some control over it. Because people take this ability for granted, they suffer when it is absent, as is currently often the case when interacting with a mobile robot.

The PI will address this deficiency in the current project. He argues that to make human-robot interactions as natural as possible, we must equip robots with our physical vocabulary and ensure that they use it appropriately, following social norms. To achieve this goal the PI will turn to the performing arts, where actors are trained to express themselves physically. A good actor can convey a vast amount of information about a character's state of mind, goals, and intentions simply by walking across the stage in a particular way. The actions may be stylized, larger-than-life, or subtle, but they are designed to convey information about the character's internal mental state. The techniques that actors employ have been honed and refined over hundreds of years and tested for effectiveness on the general public. In this research, the PI will exploit these insights and skills to develop a physical vocabulary that communicates a robot's beliefs, intentions, and goals to the humans interacting with it, thereby enabling people to better predict its actions. Finally, the PI will rigorously evaluate these behaviors to verify that they are actually useful.

Broader Impacts: Robots are becoming an ever larger part of our lives, and members of the public will, sooner or later, have to deal with them. An understanding of the physical aspects of these interactions will make the integration of robots into our everyday lives far less painful and distressing.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant (Standard)
Application #: 0917199
Program Officer: Ephraim P. Glinert
Budget Start: 2009-09-01
Budget End: 2012-11-30
Fiscal Year: 2009
Total Cost: $495,209
Name: Washington University
City: Saint Louis
State: MO
Country: United States
Zip Code: 63130