Millions of Americans are unable to independently perform activities of daily living (ADLs) such as dressing and grooming, and this number is rising rapidly as America's aging and disabled population grows. This research focuses on designing and developing new algorithms and software systems to help enable personal robots to autonomously compute safe motions for assisting disabled and elderly individuals with ADLs. The proposed framework uses kinesthetic demonstrations to teach the robot desirable motion trajectories for several specific ADL assistance tasks. Based on these demonstrations, the research develops new computational methods to extract task constraints on desirable motion trajectories, using learning methods based on Gaussian mixture models in conjunction with 3D registration methods. A key element of this project is the investigation of methods to deformably register and generalize the motion trajectories and task constraints across individuals of different shapes and sizes. To generate safe plans in dynamic real-world settings, the proposed research investigates new highly parallel algorithms that effectively exploit modern general-purpose graphics processing units (GPUs) for real-time planning in uncertain environments. The framework is evaluated on articulated mannequin testbeds.
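To make the constraint-extraction idea concrete, the following is a minimal sketch, not the project's actual method: given several time-aligned kinesthetic demonstrations of one end-effector coordinate, it fits a Gaussian (mean and standard deviation) at each time step and flags low-variance steps as task constraints, since steps where all demonstrations agree tightly are the ones the motion must reproduce. The full framework would instead fit a Gaussian mixture model over the joint (time, position) data; the function name and threshold below are illustrative assumptions.

```python
import numpy as np

def extract_constraints(demos, tight_std=0.05):
    """Simplified stand-in for GMM-based constraint extraction.

    demos: array of shape (n_demos, n_steps), time-aligned demonstrations
    of a single coordinate. Returns the mean trajectory, the per-step
    standard deviation, and a boolean mask marking tightly constrained
    steps (low spread across demonstrations => hard task constraint).
    """
    demos = np.asarray(demos, dtype=float)
    mean = demos.mean(axis=0)      # estimate of the desirable trajectory
    std = demos.std(axis=0)        # spread across demonstrations
    constrained = std < tight_std  # low variance => treat as constraint
    return mean, std, constrained

# Three synthetic demonstrations that vary mid-motion but converge at the
# end, mimicking a task where the final pose (e.g. garment placement)
# must always be reached exactly.
t = np.linspace(0.0, 1.0, 50)
demos = np.stack([np.sin(t) + noise * (1.0 - t)
                  for noise in (0.0, 0.2, -0.2)])

mean, std, constrained = extract_constraints(demos)
print("constrained steps:", int(constrained.sum()), "of", len(t))
```

In this toy example the steps near the end of the motion are flagged as constraints while the variable mid-motion segment is left free, which is the kind of structure a learned model could hand to a planner as soft corridors versus hard waypoints.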
This project brings together an interdisciplinary team with expertise in computer science, robotics, and occupational therapy, and integrates research with education through community outreach activities. In the long term, the methods developed in the proposed research could have broad societal benefits by helping enable personal robots to assist disabled and elderly individuals with ADLs, allowing them to remain safely in their homes rather than move to costly institutions.