For almost one million American adults living with physical disabilities, picking up a bite of food or pouring a glass of water presents a significant challenge. Wheelchair-mounted robotic arms -- and other physically assistive devices -- hold the promise of increasing user autonomy, reducing reliance on caregivers, and improving quality of life. Unfortunately, the very dexterity that makes these robotic assistants useful also makes them hard for humans to control. Today's users must teleoperate their assistive robots throughout entire tasks. For instance, when controlling an assistive robot for eating, users must carefully orchestrate the position and orientation of the end-effector to move a fork to the plate, spear a morsel of food, and then guide the food back towards their mouth. These challenges are often prohibitive: users living with disabilities have reported that they choose not to use their assistive robot when eating because of the associated difficulty. The key insight of this project is that controlling high-dimensional robots can become easier by learning and leveraging conventions, which enable users to convey their intentions, goals, and plans to the robot through simple, low-dimensional inputs.

The goal of this project is to study convention formation for human-robot interaction. Conventions define a relationship between everyday actions and the latent meanings that these actions embody. This project will advance the state of the art in robotics from an algorithmic perspective by: i) developing new algorithms that learn the conventions humans and robots develop through repeated interactions, ii) leveraging these conventions to build more intuitive, consistent, and controllable interfaces for teleoperating robots with high degrees of freedom, iii) developing shared autonomy algorithms that blend autonomous actions with user inputs based on the learned conventions, and iv) extending state-of-the-art shared autonomy techniques to positively influence conventions over time. In addition, the proposed shared autonomy and teleoperation algorithms will be extensively evaluated through human subject studies. This work can advance the state of teleoperation in domestic robotics, in tasks such as feeding or cooking.
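The two central algorithmic ideas above -- mapping low-dimensional user inputs to high-dimensional robot actions via a learned convention, and blending those inputs with autonomous actions -- can be illustrated with a minimal sketch. All names and the linear forms below are hypothetical stand-ins for what the project would learn from interaction data, not the project's actual implementation.

```python
import numpy as np

def decode_latent_action(z, A, b):
    """Map a low-dimensional user input z (e.g. a 2-DoF joystick deflection)
    to a high-dimensional robot action. Here the convention is a linear map;
    in practice A and b would be learned from repeated interactions."""
    return A @ z + b

def blend(user_action, robot_action, alpha):
    """Linear arbitration, a common form of shared autonomy:
    alpha = 0 gives full user control, alpha = 1 full autonomy."""
    return (1.0 - alpha) * user_action + alpha * robot_action

# Example: a 2-D joystick input driving a 7-DoF arm.
rng = np.random.default_rng(0)
A = rng.standard_normal((7, 2))   # stand-in for a learned decoder
b = np.zeros(7)
z = np.array([0.5, -0.2])         # joystick deflection

user_action = decode_latent_action(z, A, b)
robot_action = 0.1 * np.ones(7)   # stand-in for an autonomous policy's action
command = blend(user_action, robot_action, alpha=0.3)
```

The blending weight `alpha` is the lever the project's fourth thrust would adapt over time: as the robot's confidence in the learned convention grows, more authority can shift toward autonomy.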

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Project Start:
Project End:
Budget Start: 2020-10-01
Budget End: 2023-09-30
Support Year:
Fiscal Year: 2020
Total Cost: $500,000
Indirect Cost:
Name: Stanford University
Department:
Type:
DUNS #:
City: Stanford
State: CA
Country: United States
Zip Code: 94305