One of the key challenges in studying robotic systems that interact with people is the lack of a general model of how humans behave. Humans do not follow a fixed, stationary model; they change and adapt to each other and to robots over time. Humans gain experience: people's driving behavior when interacting with an autonomous car will be significantly different after many interactions. In assistive robotics, human responses change as the robots adapt. Routing decisions of autonomous cars influence the routing choices of human drivers and can produce undesirable global properties such as congestion. This introduces a new set of challenges, including how robots should plan safe and reliable strategies that are aware of their effects on people and on society as a whole. This project lays the foundations for analyzing and planning repeated interactions between humans and robots. The work will directly impact human comfort, safety, and public life by improving robots' understanding of the people they interact with in environments such as homes, hospitals, warehouses, and smart cities.

The goal of this project is to address one of the key components of safe and interactive robotics: formalizing influencing interactions, i.e., robot actions that influence human responses. This requires developing computational models of human behavior and leads to better understanding of, and formalisms for, safe and reliable interactions with robots. The project investigates three main challenges. 1) Human modeling: the investigator will develop data-efficient methods for learning computational models of human behavior during interaction with autonomous systems. One of the challenges in human-robot interaction is the scarcity of human data; this work develops active learning techniques that intelligently query for and integrate different types of human feedback (a hedged sketch of one such query loop follows this paragraph). 2) Influencing interactions: humans are clearly influenced by simple interactions with each other, e.g., people plan to arrive late when meeting a friend who is always late. Similarly, people's behavior changes when interacting with robots: if they repeatedly observe an autonomous car stuck at an intersection, they learn to navigate around it. This project designs robotics algorithms that are mindful of their effects on humans, of how they can change human behavior, and of how that change can benefit the overall system. 3) Safe interactions: when planning interactions that influence people, the robot must rely on learned human models. However, obtaining a truly correct and reliable human model can be challenging due to limited data or insufficient model parameters. This project designs verified robot policies that are robust to inaccuracies in the learned human models and that drive the overall system to desirable states over long-term interactions.
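To make the active learning idea in challenge 1 concrete, the sketch below shows one common way such query-and-update loops are structured: a Bayesian belief over candidate reward weights, pairwise trajectory comparisons chosen where the belief is most uncertain, and a Bradley-Terry update from each human answer. This is only an illustrative sketch under assumed trajectory features and a simulated human response; it is not the project's specific method, and every function and parameter name here is hypothetical.

```python
# Illustrative sketch (not the project's algorithm): active preference-based
# learning of a reward model from pairwise human feedback. The robot keeps a
# Bayesian belief over candidate reward weights, picks the trajectory pair the
# candidates disagree on most, asks the human which they prefer, and updates.
# All names, features, and the simulated "human" below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Candidate reward weights over hand-designed trajectory features
# (e.g., speed, distance to other agents, lane deviation).
candidates = rng.normal(size=(200, 3))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
belief = np.full(len(candidates), 1.0 / len(candidates))

# A small pool of trajectories, each summarized by a 3-dim feature vector.
trajectories = rng.normal(size=(30, 3))

def preference_prob(weights, feat_a, feat_b):
    """Bradley-Terry likelihood that a human with `weights` prefers A over B."""
    return 1.0 / (1.0 + np.exp(-(feat_a - feat_b) @ weights.T))

def pick_query(belief, trajectories):
    """Choose the trajectory pair the current belief is most uncertain about."""
    best, best_pair = -1.0, (0, 1)
    for i in range(len(trajectories)):
        for j in range(i + 1, len(trajectories)):
            p = preference_prob(candidates, trajectories[i], trajectories[j])
            mean_p = np.sum(belief * p)
            uncertainty = mean_p * (1.0 - mean_p)  # highest when belief is split
            if uncertainty > best:
                best, best_pair = uncertainty, (i, j)
    return best_pair

def simulated_human_answer(feat_a, feat_b, true_w=np.array([1.0, -0.5, 0.2])):
    """Stand-in for a real human; prefers A when it scores higher under true_w."""
    return (feat_a - feat_b) @ true_w > 0

for query in range(10):
    i, j = pick_query(belief, trajectories)
    prefers_a = simulated_human_answer(trajectories[i], trajectories[j])
    p = preference_prob(candidates, trajectories[i], trajectories[j])
    likelihood = p if prefers_a else 1.0 - p
    belief = belief * likelihood
    belief /= belief.sum()

print("Most probable reward weights:", candidates[np.argmax(belief)])
```

With only a handful of comparisons, the belief concentrates on weight vectors consistent with the simulated answers; in practice, the same loop would query a real human and the learned weights would feed the planning and verification components described in challenges 2 and 3.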

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 1941722
Program Officer: David Miller
Budget Start: 2020-02-01
Budget End: 2025-01-31
Fiscal Year: 2019
Total Cost: $208,046
Name: Stanford University
City: Stanford
State: CA
Country: United States
Zip Code: 94305