In this project the PIs will design, develop, and clinically test a vision-based user interface for enhancing the efficacy of wheelchair-mounted assistive robot arms in unstructured environments. The target population is wheelchair-bound individuals with limited upper-extremity movement, such as patients diagnosed with cerebral palsy, ALS, poliomyelitis, multiple sclerosis, spinal cord injury, muscular dystrophy, and similar conditions that affect use of the upper limbs. The goal is to allow these individuals to function independently, with comfort and speed, in a variety of unstructured environments such as a grocery store, a living room, or an office. The innovation that sets this project apart from existing approaches is the segregation of robot motion into gross and fine components, rather than the unnatural joint-by-joint or Cartesian control that is currently the norm. To realize this vision, the PIs will:

- develop a gross-motion human-robot interface that combines computer vision techniques with the human in the loop;
- implement real-time, robust feature-identification algorithms that suggest areas of interest to the user by segmenting the scene by color, depth, and other criteria (a sketch of this kind of segmentation follows this list);
- effect fine motion of the robot end-effector to facilitate pick-and-place tasks via a fusion of geometric ideas from vision and adaptive control (see the visual-servoing sketch below);
- develop a working prototype by unifying the HRI, sensing, and control algorithms; and
- demonstrate benchmark activities-of-daily-living tasks for wheelchair-bound individuals with upper-extremity impairments, drawing on the human resources available at Good Shepherd Rehabilitation Hospital in Pennsylvania's Lehigh Valley.
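The second objective amounts to proposing candidate objects to the user from color and depth cues. The following is a minimal sketch of such region-of-interest suggestion, assuming OpenCV and an RGB frame with an aligned metric depth map; the HSV range, depth window, and area threshold are illustrative placeholders, not values from the project.

```python
import numpy as np
import cv2

def suggest_regions(bgr, depth_m,
                    hsv_lo=(0, 80, 60), hsv_hi=(15, 255, 255),  # assumed color range
                    depth_range=(0.3, 1.2),                     # assumed reach, meters
                    min_area=500):
    """Suggest bounding boxes of candidate objects from color + depth cues."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    color_mask = cv2.inRange(hsv, np.array(hsv_lo, np.uint8), np.array(hsv_hi, np.uint8))
    # Keep only pixels within the arm's assumed reachable depth window.
    near, far = depth_range
    depth_mask = ((depth_m > near) & (depth_m < far)).astype(np.uint8) * 255
    mask = cv2.bitwise_and(color_mask, depth_mask)
    # Remove speckle so the suggestions are stable enough to present to the user.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```

In a deployed interface, boxes like these would be overlaid on the camera view so the user can confirm or reject a target with whatever access device they use.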
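For the fine-motion stage, the "fusion of geometric ideas from vision and adaptive control" falls in the family of visual servoing. As a reference point only, here is a minimal sketch of the classical image-based visual servoing (IBVS) law v = -lambda * pinv(L) * (s - s*) for point features; this is a standard textbook baseline, not the project's adaptive controller, and the feature coordinates, depths, and gain are assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z, y / Z, 1.0 + y * y, -x * y,         -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) driving features toward desired."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features, float) - np.asarray(desired, float)).ravel()
    # Classical IBVS: v = -gain * pinv(L) @ (s - s*)
    return -gain * np.linalg.pinv(L) @ error

# Example: servo four tracked corner points toward a square goal pattern.
s = [(0.10, 0.12), (0.30, 0.11), (0.31, 0.33), (0.09, 0.34)]       # current (assumed)
s_star = [(0.10, 0.10), (0.30, 0.10), (0.30, 0.30), (0.10, 0.30)]  # desired (assumed)
print(ibvs_velocity(s, s_star, depths=[0.8] * 4))
```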

Broader Impacts: The design of an enhanced-functionality wheelchair robot will be a major leap toward rehabilitation for a broad segment of society whose members otherwise have only limited access to resources and opportunities. The PIs expect their approach to be directly relevant to any mobile device with on-board vision where one can take advantage of the human in the loop, and thus to provide a new model of human-robot interaction for assistive technology. The interaction methods developed will be adaptable to a wide range of access devices, from single-switch scanning to sip-and-puff controls to joysticks. Moreover, the PIs expect the fusion of vision and nonlinear control demonstrated in this project to advance the theory and applicability of computer vision and visual servoing.

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 0649736
Program Officer: Ephraim P. Glinert
Budget Start: 2006-07-01
Budget End: 2011-11-30
Fiscal Year: 2006
Total Cost: $349,287
Institution: University of Central Florida
City: Orlando
State: FL
Country: United States
Zip Code: 32816