The focus of this project is on human-robot interaction (HRI) during teleoperation, in both its fundamental research and its educational activities. The research derives from the observation that object grasping and manipulation using conventional teleoperation approaches place a significant control burden on the human operator and reduce task performance. This is because indirect manipulation of an object through a robot hand may cause inaccurate or undesired robot motion, while the limited control inputs available to the operator (e.g., joystick, data glove) make direct kinematic mapping challenging for complex object manipulation tasks. Consequently, the human operator must mentally and physically transform (e.g., rotate, translate, scale, deform) the desired robot actions into the required inputs at the interface; these transformations significantly increase control difficulty. The primary research goal of this project is to develop a novel goal-guided self-reflective control interface (GSRCI), which will enable the robot to understand the operator's high-level objective during an object-grasping operation and to conform to task constraints, in order to reduce control difficulty and ensure the success of subsequent manipulation. The primary educational activity derives from a common deficiency of distance learning programs supported by existing teleconferencing technologies, namely that they offer limited or no opportunity for hands-on learning. To address this problem, an interactive distance learning system (IDLS) will be developed that immerses remote students in the classroom environment through student tele-control of a robot's arms and hands for object manipulation and/or interaction with other classmates, thereby enabling remote users to feel present in the classroom and engaged in class activities.
The task modeling technology and the goal-guided control interface technology from the research work will be coupled to develop an easy and intuitive teleoperation-based distance learning system for K-12 students. The GSRCI represents transformative technology with the potential to provide a new paradigm of HRI that will significantly improve the power and quality of teleoperation and broadly impact applications in diverse domains, including assistance for the elderly and disabled, minimally invasive surgery, space and underwater exploration, military reconnaissance, nuclear servicing, and urban search and rescue. The IDLS will foster engagement for remote students and allow them to successfully participate in STEM activities, offering disadvantaged groups the potential to learn regardless of their ability to physically attend a class setting.
To achieve these goals, a cognitive interface will be created that enables a robot to flexibly reproduce actions that accommodate the operator's motion inputs, as well as to autonomously regulate these actions in a self-reflective manner to compensate for task constraints, thereby facilitating subsequent manipulation. A novel goal-achievement indicator will predict the level of goal accomplishment for the planned action. Additionally, a goal-guided remedial planner will regulate this action to accomplish the goal by relaxing the constraint bound on the operator's motion inputs using an adaptive local search strategy. New models of human and robot task-based grasp behaviors will also be developed, providing a knowledge base for inferring the human's goal in a task and for conducting goal-guided robot grasp planning. To link quantified task constraints with symbolic tasks, account for uncertainty in task modeling, and allow task reasoning from partially observed data, a directed probabilistic Bayesian model will encode the statistical dependence among the task goal, object attributes, actions, and task constraints.
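The interplay between the goal-achievement indicator and the remedial planner can be illustrated with a minimal sketch. Everything here is illustrative rather than the proposed implementation: the indicator is a toy score that rises as a hand pose approaches a hypothetical goal pose, and the planner performs a simple adaptive local search, sampling near the operator's input and widening the search bound only while the predicted goal achievement remains below a threshold.

```python
import random

def goal_achievement(pose, goal_pose):
    # Toy indicator: score in (0, 1] that grows as the candidate pose
    # approaches the goal pose (stand-in for the learned predictor).
    dist = sum((p - g) ** 2 for p, g in zip(pose, goal_pose)) ** 0.5
    return 1.0 / (1.0 + dist)

def remedial_plan(operator_pose, goal_pose, threshold=0.9,
                  init_bound=0.05, growth=1.5, rounds=10,
                  samples=200, seed=0):
    """Adaptive local search: starting from the operator's input,
    greedily accept nearby candidates that improve the predicted
    goal achievement, relaxing (widening) the search bound each
    round until the threshold is met or the budget is exhausted."""
    rng = random.Random(seed)
    best, bound = tuple(operator_pose), init_bound
    for _ in range(rounds):
        if goal_achievement(best, goal_pose) >= threshold:
            break  # planned action is already predicted to succeed
        for _ in range(samples):
            cand = tuple(b + rng.uniform(-bound, bound) for b in best)
            if goal_achievement(cand, goal_pose) > goal_achievement(best, goal_pose):
                best = cand
        bound *= growth  # relax the constraint bound for the next round
    return best

# Usage: regulate a slightly off-target operator pose.
regulated = remedial_plan((0.3, 0.3, 0.3), (0.0, 0.0, 0.0))
```

The key design point mirrored here is that regulation stays as close as possible to the operator's input: the bound is only widened when the indicator predicts the current plan would fail.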
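A directed probabilistic model of this kind can be sketched with a toy discrete Bayesian network. The variables, values, and probabilities below are all illustrative assumptions, not the proposed model: the task goal influences the grasp action, and the goal together with the object attribute determines the task constraint. Inference by enumeration then recovers a posterior over goals from partial evidence, which is the kind of reasoning from partially observed data the model is intended to support.

```python
from itertools import product

# Illustrative network: goal -> action, (goal, object) -> constraint.
GOALS = ["pour", "handover"]
OBJECTS = ["mug", "bottle"]
ACTIONS = ["top_grasp", "side_grasp"]
CONSTRAINTS = ["keep_upright", "free_orient"]

# All numbers below are made up for illustration.
P_goal = {"pour": 0.5, "handover": 0.5}
P_object = {"mug": 0.6, "bottle": 0.4}
P_action_given_goal = {
    ("pour", "side_grasp"): 0.8, ("pour", "top_grasp"): 0.2,
    ("handover", "side_grasp"): 0.3, ("handover", "top_grasp"): 0.7,
}
P_constraint_given_goal_object = {
    ("pour", "mug", "keep_upright"): 0.9, ("pour", "mug", "free_orient"): 0.1,
    ("pour", "bottle", "keep_upright"): 0.8, ("pour", "bottle", "free_orient"): 0.2,
    ("handover", "mug", "keep_upright"): 0.3, ("handover", "mug", "free_orient"): 0.7,
    ("handover", "bottle", "keep_upright"): 0.2, ("handover", "bottle", "free_orient"): 0.8,
}

def posterior_goal(evidence):
    """P(goal | evidence) by enumeration over the unobserved variables.
    `evidence` maps variable names to observed values and may be partial."""
    scores = {}
    for g in GOALS:
        total = 0.0
        for o, a, c in product(OBJECTS, ACTIONS, CONSTRAINTS):
            assign = {"goal": g, "object": o, "action": a, "constraint": c}
            if any(assign[k] != v for k, v in evidence.items()):
                continue  # inconsistent with the observations
            total += (P_goal[g] * P_object[o]
                      * P_action_given_goal[(g, a)]
                      * P_constraint_given_goal_object[(g, o, c)])
        scores[g] = total
    z = sum(scores.values())
    return {g: s / z for g, s in scores.items()}

# Usage: observing only a side grasp shifts belief toward "pour".
belief = posterior_goal({"action": "side_grasp"})
```

Encoding the dependencies this way is what lets symbolic task knowledge (goals, constraint labels) and quantified observations coexist in one model, with missing variables simply summed out.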