Assistive robots promise to improve the lives of many people with disabilities in the near future. But whether the disability stems from traumatic spinal cord injury, early-onset multiple sclerosis, or the common effects of advancing age, the variety of physical and mental disabilities, and the differing psychological reactions of individuals to them, make it impossible to program one-size-fits-all behaviors for assistive robots. To achieve its full potential, an assistive robot must learn to match the type and degree of assistance it offers to the disability level and preferences of the user, as well as to the user's environment and the level of trust between the user and the robot. Training the robot to fit the individual user is therefore essential - but requiring every user to train every aspect of robot behavior is unrealistic. In this collaborative project involving faculty at two institutions, the PIs argue that a solution may derive from the observation that whenever a user needs to train a robot for a new behavior, there are likely other users with similar disabilities, preferences, and environments who would also benefit from that behavior. The PIs will develop techniques that enable the learning of behaviors in human+robot pairs, the identification of possible beneficiaries of the new behaviors, and the transfer of these behaviors to those beneficiaries (where transferring a behavior from one human+robot pair to another might involve the transfer of code and data for the robot and/or the transfer of skills to the human user). This research will demonstrate how mixed human+robot interaction can alter the relationship between users and their environment, while also making physical interaction between robot and human safer and more efficient. The work will have broad national impact because of the expected rapid growth of the elderly segment of the population in the coming years.
The PIs will pursue four thrusts to achieve their vision. First, they will design adaptive algorithms and controllers (e.g., for sliding-scale robot autonomy) that allow a robot to be an effective facilitator of user interaction with novel environments during activities of daily living (ADLs). Second, they will develop models of human+robot trust in the context of assistive robot technology and examine the effect of trust on the user experience. Third, they will implement social agents through which the community of users with a specific disability can, via their social networks, help create and adopt new solutions for ADL tasks. Fourth, they will validate the ability of human+robot exchanges to increase the functionality and performance of ADLs for individuals with disabilities. The research will build on recent advances in robot control, psychological models of social learning, and models of social networks, as well as the machine learning techniques of collaborative filtering and recommendation. Project outcomes will include social agents that can interact on behalf of the user, discover learning opportunities, and actively participate in the transfer of learning. The work will contribute to our understanding of how users can partner, both individually and collectively, with assistive robots, and will address open questions about the interoperability and intelligibility of knowledge transferred from one learning system to another.
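To make the collaborative-filtering idea concrete, one way to identify possible beneficiaries of a newly learned behavior is to compare feature profiles of human+robot pairs and flag those most similar to the pair that trained the behavior. The sketch below is purely illustrative: the pair names, the three profile features, and the similarity threshold are assumptions for the example, not elements of the proposed system.

```python
from math import sqrt

# Hypothetical profiles of human+robot pairs. Each vector scores illustrative
# features such as grip strength, environment clutter, and user trust level
# (all feature choices and values are assumptions for this sketch).
profiles = {
    "pair_A": [0.9, 0.2, 0.7],
    "pair_B": [0.8, 0.3, 0.6],
    "pair_C": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def beneficiaries(source, threshold=0.9):
    """Return pairs similar enough to `source` to receive its new behavior."""
    src = profiles[source]
    return [pair for pair, feats in profiles.items()
            if pair != source and cosine(src, feats) >= threshold]

# pair_B's profile is close to pair_A's, so it is flagged as a beneficiary;
# pair_C's very different profile keeps it below the threshold.
print(beneficiaries("pair_A"))
```

In a deployed system the profile vectors would come from the user models and trust measures developed in the project's other thrusts, and a learned behavior flagged for transfer would still pass through the human-in-the-loop adoption process the social agents mediate.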