Flexibility and adaptability of a robotic manipulator can be achieved by incorporating vision and sensory information in the feedback loop. Previous research by the PI introduced a framework called "controlled active vision" for efficient integration of the vision sensor in the feedback loop. This framework emphasized eye-in-hand robotic systems (where the vision sensor is mounted on or close to the manipulator's end-effector) and was applied to the problem of robotic visual tracking and servoing with very promising results. This research extends the framework to other problems of eye-in-hand robotic systems such as the derivation of depth maps from controlled motion; the vision-guided, automatic grasping of moving objects; the active calibration of the robot-camera system; the problem of automatically detecting moving objects of interest; and the computation of the pose of the target relative to the camera. In addition, the new work investigates issues such as the stability and the robustness of the algorithms. All the work is experimentally verified on the Minnesota Robotic Visual Tracker (a flexible eye-in-hand robotic system). This research has potential applications to transportation (e.g., pedestrian detection and tracking, vision-based vehicle following), inspection, and assembly (e.g., vision-guided manipulation of moving objects).
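One of the listed problems, deriving depth maps from controlled motion, can be illustrated with a minimal sketch. If the eye-in-hand camera is commanded to translate by a known baseline perpendicular to its optical axis, the depth of a tracked feature follows from its image disparity by classic triangulation. The function name and parameter values below are illustrative assumptions, not taken from the project itself.

```python
def depth_from_controlled_motion(focal_px, baseline_m, disparity_px):
    """Triangulate depth from a known lateral camera translation.

    focal_px     -- focal length in pixels (assumed known from calibration)
    baseline_m   -- commanded translation between the two views, in meters
    disparity_px -- horizontal shift of the feature between the two images
    """
    if disparity_px <= 0:
        raise ValueError("feature must shift opposite the direction of motion")
    # Classic two-view triangulation: Z = f * b / d
    return focal_px * baseline_m / disparity_px

# Example: a feature shifting 25 px after a 0.10 m commanded translation,
# seen by a 500 px focal-length camera, lies 2.0 m away.
print(depth_from_controlled_motion(500.0, 0.10, 25.0))  # -> 2.0
```

Because the translation is commanded by the controller rather than estimated, the baseline is known accurately, which is precisely the advantage of recovering depth from *controlled* motion.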

Agency: National Science Foundation (NSF)
Institute: Division of Information and Intelligent Systems (IIS)
Application #: 9502245
Program Officer: Jing Xiao
Budget Start: 1995-08-01
Budget End: 1999-07-31
Fiscal Year: 1995
Total Cost: $145,000
Name: University of Minnesota Twin Cities
City: Minneapolis
State: MN
Country: United States
Zip Code: 55455