The goal of this project is to develop and experimentally verify a skill-based approach to hand-eye coordination for robotic systems. The work focuses on the control of robot manipulators using weakly calibrated or uncalibrated stereo vision. The major innovations are: 1) reconfigurable, feature-based tracking mechanisms that simplify image processing; 2) projective-invariant-based feedback controllers that perform correctly despite calibration error; and 3) a taxonomy of geometric "translation rules" for converting a geometric task specification into a visual one. The research is driven by a series of benchmark problems drawn from the manipulation domain. In addition to software development and experimentation, theoretical methods will be developed for analyzing the stability of visual tracking and of hand-eye servoing systems, and methods for detecting and responding to execution errors will be investigated. The long-term goal of this work is a system that can automatically synthesize and execute a vision-based task specification from a geometric task specification.
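To make the idea of servoing without accurate calibration concrete, the following is a minimal sketch (not the project's actual controller) of a standard approach to uncalibrated visual servoing: the image Jacobian relating joint motion to image-feature motion is unknown, so it is estimated online with a rank-one Broyden update while a simple proportional feedback law drives the feature error to zero. All names, gains, and the toy linear camera model below are illustrative assumptions.

```python
import numpy as np

def broyden_update(J, dq, de, lam=0.5):
    """Rank-one Broyden update of the estimated image Jacobian.

    J  : current (m x n) Jacobian estimate
    dq : most recent joint displacement, shape (n,)
    de : observed change in image features, shape (m,)
    lam: update rate in (0, 1]
    """
    denom = dq @ dq
    if denom < 1e-12:          # no motion, nothing to learn
        return J
    return J + lam * np.outer(de - J @ dq, dq) / denom

def servo_step(J, e, gain=0.3):
    """Proportional visual-servoing step: dq = -gain * pinv(J) @ e."""
    return -gain * np.linalg.pinv(J) @ e

# Toy demo: the "camera" maps joints to features through an unknown
# linear map A; the controller starts from a deliberately wrong
# Jacobian estimate (identity) and still converges.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])     # true (unknown to the controller) Jacobian
f_star = np.zeros(2)           # desired image features
q = np.array([1.0, 1.0])       # initial joint configuration
J = np.eye(2)                  # miscalibrated initial estimate

f = A @ q
for _ in range(60):
    e = f - f_star             # image-space feature error
    dq = servo_step(J, e)
    q = q + dq
    f_new = A @ q
    J = broyden_update(J, dq, f_new - f)  # learn the Jacobian online
    f = f_new

err = np.linalg.norm(f - f_star)
```

The point of the sketch is that the feedback loop is closed on image-space error, so convergence does not require the Jacobian estimate to be correct, only good enough to keep the step direction error-reducing.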