For robots to become ubiquitous collaborators, they must interact physically with objects in unstructured human environments. Robots must adeptly grasp, push, squeeze, snap, balance, stabilize, and hand off objects, and must communicate effectively about these actions with their human collaborators. People must be able not only to specify to a robot what to do and how to do it, but also to interpret what a robot intends to do (and how). Effective communication about physical interactions will be essential if robots are to perform such tasks as delivering care to older adults, responsively assisting technicians with repairs, or being trained by non-experts to perform repetitive assembly tasks. This project will enable a new generation of robot applications and advance the vision of ubiquitous, collaborative robots by developing better methods for robots to communicate with people.
The project will address key challenges in communicating about physical interactions: conveying invisible and unfamiliar quantities (e.g., forces and compliances), communicating plans and contingencies, and communicating about what did not (or should not) happen. These challenges span three core problems: specification, interpretation, and monitoring. To address them, the project will 1) perform formative studies to gain insight into how people communicate about physical interactions and interpret displays; 2) develop methods for specifying physical actions based on the idea of augmented demonstrations, methods for interpreting physical actions based on the idea of interpretable representations, and methods for monitoring physical actions based on multimodal communication; and 3) deploy these ideas in prototype systems for contextualized scenarios for evaluation.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.