Assistive and service robots have made significant strides and have the potential to be transformative in many fields, including health care, aging, disability management, and work in dangerous environments. For this, however, it is important that these robots can be "programmed" and used by largely untrained users, including caregivers or elderly persons. This project aims to develop a novel approach that allows robots of widely varying designs to perform assistive and supportive tasks by observing demonstrations performed by a human. Rather than copying movement, which would require that the robot resemble the human, the proposed approach uses these demonstrations to "infer" the important aspects of the task and translate them into a strategy that can be executed by the robot in varying situations and settings.

The proposed approach treats imitation not as copying observed movements but rather as learning to replicate the function of the demonstration. For this, it transforms observations into a hierarchical Markov task model using learned models of observed environmental dynamics. This probabilistic task model is then mapped onto a hierarchical Semi-Markov Decision model of the robot's behavioral capabilities using an adaptive similarity function that represents both the correspondence between attributes in the two models and the importance of particular attributes for successful task performance. This similarity (cost) function is adapted during imitation using Reinforcement Learning and qualitative feedback from the user, allowing the system to improve and personalize its imitation capabilities. The project develops a proof-of-concept system and evaluates it on a wheeled mobile manipulator in the context of common household tasks.
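To make the mapping idea concrete, the following is a minimal, hypothetical sketch (not the project's actual system): demonstrated task attributes are matched against a robot's behavioral primitives through a weighted similarity score, and the attribute weights, standing in for the importance terms the abstract describes, are nudged by scalar user feedback. All names, attribute encodings, and the simple weight update are illustrative assumptions; the project itself describes a Reinforcement Learning adaptation over hierarchical task and decision models.

```python
import numpy as np

class AdaptiveSimilarity:
    """Toy weighted similarity between demonstrated task attributes and
    a robot capability's attributes, with feedback-driven weight updates."""

    def __init__(self, n_attributes, learning_rate=0.1):
        self.weights = np.ones(n_attributes) / n_attributes  # attribute importance
        self.lr = learning_rate

    def score(self, task_attrs, capability_attrs):
        # Correspondence per attribute (both vectors assumed in [0, 1]),
        # combined with the current importance weights.
        return float(np.dot(self.weights, 1.0 - np.abs(task_attrs - capability_attrs)))

    def select(self, task_attrs, capabilities):
        # Pick the robot behavior whose attributes best match the demonstrated step.
        scores = {name: self.score(task_attrs, attrs) for name, attrs in capabilities.items()}
        return max(scores, key=scores.get), scores

    def update(self, task_attrs, chosen_attrs, feedback):
        # Shift importance toward attributes that agreed with the chosen behavior
        # when feedback is positive (+1), away when negative (-1).
        agreement = 1.0 - np.abs(task_attrs - chosen_attrs)
        self.weights += self.lr * feedback * agreement
        self.weights = np.clip(self.weights, 1e-3, None)
        self.weights /= self.weights.sum()  # keep weights normalized


if __name__ == "__main__":
    sim = AdaptiveSimilarity(n_attributes=3)
    # Attributes might encode, e.g., object contact, end-effector height, grasp type.
    demo_step = np.array([0.9, 0.2, 0.7])
    capabilities = {
        "pick_up": np.array([0.8, 0.3, 0.6]),
        "push":    np.array([0.4, 0.1, 0.2]),
    }
    choice, scores = sim.select(demo_step, capabilities)
    print("selected behavior:", choice, scores)
    sim.update(demo_step, capabilities[choice], feedback=+1.0)  # user says "good"
    print("updated attribute weights:", sim.weights)
```

The per-attribute weighting is the key design choice this sketch tries to convey: the system learns which aspects of a demonstration matter for the task's function, rather than requiring the robot to reproduce the human's motion.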

Project Start:
Project End:
Budget Start: 2015-09-01
Budget End: 2017-08-31
Support Year:
Fiscal Year: 2015
Total Cost: $139,968
Indirect Cost:
Name: University of Texas at Arlington
Department:
Type:
DUNS #:
City: Arlington
State: TX
Country: United States
Zip Code: 76019