Intelligent assistive robots have the potential to improve our society in many walks of life: they could help us take care of the elderly and the sick, act as first responders in emergencies, and kickstart extra-planetary exploration. However, today's robots require highly trained experts for their customization, configuration, and repair. This not only makes it difficult to realize the potential benefits of assistive robots in society, but also creates large uncertainties in the future of employment for millions in the workforce. To help address such issues, this project develops new ways for robots to explain their actions to humans, considering the proficiency of the users. Thus, robots will be able to tailor their explanations to what someone already understands about the robots' capabilities and limitations.
The project focuses on automatically explaining unexpected robot behavior to users who may have imprecise knowledge about the underlying task and/or the robot. Such explanations can be used to efficiently diagnose specification problems and to customize robots toward desired behaviors. The proposed approach formalizes three general principles: 1) customizing explanations according to the audience; 2) treating explanation as an interactive process; and 3) using the questions asked of a robot to estimate the user's level of expertise. In this framework, a user may present a foil, or a counterfactual proposal of alternative robot behavior, that they find more natural. The proposed approach estimates the user's proficiency using a lattice of abstract models and computes reasons why the proposed alternatives would not work, using minimal additional detail. In this way, the system can produce explanations that are contrastive and aligned with the proficiency of the user.
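The foil-refutation idea above can be illustrated with a minimal sketch. Here a "lattice" is simplified to a linear sequence of models ordered from most abstract to most concrete, where each model adds action preconditions (details); the system searches for the most abstract model in which the user's proposed alternative fails and explains using only that level of detail. The toy domain, function names, and data structures are all illustrative assumptions, not the project's actual formalism.

```python
# Hypothetical sketch: refuting a user's foil with minimal additional detail.
# Models are ordered from most abstract (fewest details) to most concrete;
# each maps an action name to the set of preconditions it requires.
MODEL_LATTICE = [
    {"pick": set(), "move": set()},                           # most abstract
    {"pick": {"gripper_empty"}, "move": set()},               # adds one detail
    {"pick": {"gripper_empty", "object_reachable"},
     "move": {"path_clear"}},                                 # most concrete
]

def refute_foil(foil_plan, initial_facts):
    """Find the most abstract model under which the foil fails.

    foil_plan: list of action names the user proposes as an alternative.
    initial_facts: set of facts assumed to hold in the initial state.
    Returns (lattice_level, action, missing_preconditions), or None if
    the foil succeeds even in the most concrete model.
    """
    for level, model in enumerate(MODEL_LATTICE):
        for action in foil_plan:
            missing = model[action] - initial_facts
            if missing:
                # Explain the failure using only this level of detail.
                return level, action, missing
    return None  # no refutation found; the foil is a valid alternative

# A user asks, "why not just pick it up?" while the gripper is occupied.
result = refute_foil(["pick"], initial_facts={"object_reachable"})
# result identifies the abstraction level and the missing precondition
# ("gripper_empty") needed to refute the foil.
```

The design choice this illustrates is that the explanation stops at the shallowest model sufficient to refute the foil, so a novice is not burdened with details (e.g., `path_clear`) that are irrelevant to their question.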
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.