It is often assumed that using robots to help people execute tasks will yield better performance than if either the person or the robot were operating alone. However, research on automated systems suggests that the performance of a human-machine system depends on the extent to which the person trusts the machine and the extent to which that trust (or distrust) is justified. As robots are developed to aid people with complex tasks, it is critical not only that we build systems that people can trust, but that these systems foster a level of trust appropriate to their actual capabilities. A user whose trust in the robot is miscalibrated may misuse or abuse the robot's autonomous capabilities, or expose people to danger. This project proposes to develop quantitative metrics of a user's trust in a robot, as well as a model that estimates the user's level of trust in real time. Using this information, the robot will be able to adjust its interaction accordingly.

Promoting appropriate levels of trust will be particularly beneficial in safety-critical domains such as urban search and rescue and assistive robotics, where users risk harm to themselves, the robot, or the environment if they do not trust the robot enough to rely on its autonomous capabilities. The research has the potential for a large impact on the field of human-robot interaction, as few studies have explicitly examined trust in robots. Being able to model trust and to foster appropriate levels of it will result in more effective use of robotic automation, safer interactions, and better task performance.

Project Report

This project, in collaboration with Aaron Steinfeld's lab at Carnegie Mellon University (IIS-0905148), explored how a robot's actions during use influence human trust in the robot, focusing heavily on non-social tasks and on variations in robot autonomy, performance, and interface design. The collaborative team developed a new metric for measuring an operator's real-time trust in a robot system; validated existing trust metrics from software systems for use in robotics; discovered that status feedback can improve trust in a robot system but also increases operator workload; learned that decreasing an operator's situation awareness increases their trust in the robot system; found that system failures have a direct and immediate influence on a person's trust in the robot system; and constructed a model of the factors that influence an operator's trust in a remotely operated robot system. The team also investigated how the behaviors of robot systems affect bystanders.

Aspects of the work were incorporated into our related research, allowing expansion to other automated-system domains, such as autonomous cars and medical diagnosis systems, to investigate the factors these domains share and those on which they differ. Side projects included examinations of the connection between perceived robot malfunctions and deceptive robot behaviors, of how robots influence human honesty, and of human willingness to blindly accept robot advice. Key outputs of this work include validation of existing trust measures and models within the context of human-robot interaction, development and validation of new measures and models, new methodologies, and improvement of existing research testbeds and systems. The team published 19 papers, two of which are slated for publication in 2015.

The team also had a significant impact on professional capacity through integration of research results into numerous classes and through the participation of many students. Combined, the two sites involved 1 completed PhD degree at UML, 2 PhD students in progress at UML, 3 completed MS degrees at CMU, 2 completed MS degrees at UML, 10 undergraduate students at CMU, 13 undergraduate students at UML, and 1 high school student at UML. Of the 23 undergraduate students who worked on the project, 4 went on to graduate programs (2 at each site). Finally, the team extensively disseminated results to classes, industry, and K-12 students and teachers, and through numerous professional organizations and conferences. The two PIs were also the General Co-Chairs of the 2012 ACM/IEEE International Conference on Human-Robot Interaction.
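To make the real-time trust-estimation idea concrete, the sketch below shows a toy estimator of the general kind the project describes. It is purely illustrative: the class, parameters, thresholds, and autonomy modes are hypothetical, not the project's published metric or model. It simply encodes two of the reported findings, that failures damage trust directly and immediately while successes rebuild it more gradually, and that the robot can adapt its level of autonomy to the current trust estimate.

```python
# Illustrative sketch only: a hypothetical event-driven trust estimator.
# All names and numeric parameters are invented for illustration and are
# not taken from the project's actual metric or model.

from dataclasses import dataclass


@dataclass
class TrustEstimator:
    """Tracks an estimate of operator trust in [0, 1] as the robot acts."""
    trust: float = 0.5   # neutral prior before any interaction
    gain: float = 0.05   # small step toward full trust after each success
    loss: float = 0.30   # large step toward zero trust after a failure,
                         # reflecting the finding that failures affect
                         # trust directly and immediately

    def update(self, success: bool) -> float:
        """Update the trust estimate after one observed robot action."""
        if success:
            self.trust += self.gain * (1.0 - self.trust)
        else:
            self.trust -= self.loss * self.trust
        return self.trust

    def suggested_autonomy(self) -> str:
        """Let the robot adjust its interaction to the estimated trust."""
        if self.trust < 0.3:
            return "manual"      # operator likely to override; stay teleoperated
        if self.trust < 0.7:
            return "supervised"  # autonomous actions with confirmation prompts
        return "autonomous"      # full use of autonomous capabilities


if __name__ == "__main__":
    estimator = TrustEstimator()
    # One failure amid successes produces a sharp, immediate drop in trust.
    for outcome in [True, True, True, False, True]:
        t = estimator.update(outcome)
        print(f"trust={t:.2f} -> mode={estimator.suggested_autonomy()}")
```

The asymmetric update (slow growth, sharp decay) is one simple way to capture the failure finding; the project's own model of trust factors is developed in its publications.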

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Type: Standard Grant
Application #: 0905228
Program Officer: Ephraim P. Glinert
Budget Start: 2009-09-01
Budget End: 2014-08-31
Fiscal Year: 2009
Total Cost: $560,000
Name: University of Massachusetts Lowell
City: Lowell
State: MA
Country: United States
Zip Code: 01854