Assistive robots have the potential to transform the lives of persons with upper extremity disabilities by helping them perform basic daily activities, such as manipulating objects and feeding themselves. However, human control of assistive robots presents substantial challenges. The high dimensionality of robotic arms makes joystick-like interfaces unintuitive and hard to use, and motions resulting from direct teleoperation are often slow, imprecise, and severely limited in dexterity. This research addresses these challenges by developing learning algorithms for shared autonomy, in which the robot anticipates the user's intent and provides a degree of assistive autonomy to ensure fluid and successful motions. This research will also pave the way for future work that bootstraps from teleoperation and builds toward full robot autonomy.
The research proposes a hierarchical, multi-phase approach to shared autonomy, using techniques from deep learning and reinforcement learning. The system begins by using deep inverse reinforcement learning to quickly infer the user's high-level goal, such as whether the user wants to grasp a particular object or operate an appliance, from raw sensory inputs. This goal inference layer supplies objectives to a lower control layer, which consists of deep neural network control policies that directly process raw sensory input about the environment and the user. These policies choose low-level controls that satisfy the high-level objective while minimizing disagreement with the user's commands. The algorithms will be deployed and tested on a wheelchair-mounted robot arm with the potential to assist users with upper extremity disabilities in performing activities of daily living.
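To make the two-layer arbitration concrete, the following Python sketch illustrates one plausible instantiation of such a pipeline on a toy 2D reaching task. All names (reward, goal_posterior, shared_control), the softmax-style goal inference, and the linear blending rule are illustrative assumptions standing in for the learned deep models described above, not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate goals (e.g., object positions on a table); illustrative only.
goals = np.array([[0.5, 0.2], [0.1, 0.8], [0.9, 0.9]])

def reward(state, action, goal):
    """Stand-in for a learned reward function: higher when the user's
    action points toward the goal. Deep IRL would learn this from data."""
    direction = goal - state
    direction /= np.linalg.norm(direction) + 1e-8
    return float(action @ direction)

def goal_posterior(trajectory, goals):
    """MaxEnt-IRL-style goal inference: P(goal) is proportional to
    exp(total reward the observed actions earn under that goal)."""
    scores = np.array([
        sum(reward(s, a, g) for s, a in trajectory) for g in goals
    ])
    w = np.exp(scores - scores.max())  # numerically stable softmax
    return w / w.sum()

def policy_action(state, goal):
    """Stand-in for a learned low-level control policy: move toward goal."""
    d = goal - state
    return d / (np.linalg.norm(d) + 1e-8)

def shared_control(user_cmd, state, posterior, alpha=0.8):
    """Blend the most likely goal's policy action with the user's command;
    assistance scales with confidence, limiting disagreement with the user."""
    g = goals[posterior.argmax()]
    gain = alpha * posterior.max()
    return gain * policy_action(state, g) + (1.0 - gain) * user_cmd

# Simulated teleoperation toward goals[0] with noisy user commands.
state = np.array([0.0, 0.0])
trajectory = []
for _ in range(20):
    user_cmd = policy_action(state, goals[0]) + 0.3 * rng.normal(size=2)
    trajectory.append((state.copy(), user_cmd))
    posterior = goal_posterior(trajectory, goals)
    state = state + 0.05 * shared_control(user_cmd, state, posterior)

print("inferred goal posterior:", np.round(posterior, 3))
print("final state:", np.round(state, 3), "target:", goals[0])
```

As the noisy commands accumulate evidence, the posterior concentrates on the intended goal and the blending gain rises, so assistance increases only as confidence in the inferred intent grows, one simple way to trade off autonomy against deference to the user.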