A decade ago, a robot typically played a single role: “tool” or “teammate”. Today, although there are still clear cases where one model or the other applies, most collaborative robots in long-term deployments in homes, workplaces, and schools readily switch back and forth between being an agentic “teammate” and an inanimate “tool”. While people tolerate this shift in perceived agency, it is unknown how it affects interpersonal properties that are typically attributed only to agentic “teammate” robots. This project will evaluate factors that affect how people trust robots, recognizing that the way humans “trust” non-agentic automation is fundamentally different from the way we “trust” agents and agentic robots. What happens to the trust formed while a robot is an agent when it becomes an inanimate tool? What happens when the inanimate tool returns to being an agentic teammate? The proposed research will fill a significant gap in our understanding of how children develop trust in robots. While trust in non-agentic automation is comparatively well understood, there has been little systematic study of trust in robots that alternate between agentic and non-agentic roles. This work has broad applications to the future deployment of robots as systems whose behavior varies over time between agentic (human-like) and non-agentic (object-like).

The investigators will concentrate on the role of agency in establishing trust in human-robot interaction in an important application domain: children’s learning. Educational robots designed specifically for children are increasingly common and often replace human channels of social information. These robots cannot succeed without trust: because children are inherently social and collaborative learners, trust is a prerequisite for successful learning. Integrating insights from interactive robot design into experiments with preschool and early school-age children, the project will determine how shifts in perceived agency affect the formation, maintenance, and repair of trust. Study 1 investigates how variations in low-level perceptual cues over a single interaction influence trust and subsequent learning. Study 2 examines how variations in high-level social cues lead to differential trust and subsequent learning. For these experiments, the investigators designed a set of age-appropriate collaborative learning games; they also created a coding scheme for child behavior and a post-interaction child interview to assess children’s perceptions of the robots and to measure the effectiveness of learning. Findings and activities of this project could have broad impacts in multiple arenas, including: (1) design guidelines that will influence a broad range of application areas, including healthcare, manufacturing, and education; (2) enhancement and augmentation of learning, education, and training, including research opportunities for graduate and undergraduate investigators; (3) broadening participation in computing; and (4) dissemination of science to the general public and to the research community.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Behavioral and Cognitive Sciences (BCS)
Type: Standard Grant (Standard)
Application #: 1955280
Program Officer: Soo-Siang Lim
Budget Start: 2020-09-15
Budget End: 2024-08-31
Fiscal Year: 2019
Total Cost: $367,918
Institution: Cornell University
City: Ithaca
State: NY
Country: United States
Zip Code: 14850