The biggest challenges facing robotics today involve uncertainty, processor faults, and self-organization. The proposed research embodies a novel yet realistic approach to visually guided control of multijoint movement in uncertain environments. It is based on a new network control theory that focuses on adaptive control principles realized in parallel neural-like networks. This adaptive control theory allows a control system to adapt to a wide range of kinematic configurations while it learns a "sense of space" through its own experience. The system uses both sensor and effector constraints to adapt to its mechanical system over time and to maintain that adaptation through unforeseen changes in the mechanics and in sensor-actuator calibrations. The essence of this proposal is a representation of visual targets in terms of the actuator signals used to manipulate those targets. The proposed controller will learn precise sensory-motor calibrations starting from only loosely defined relationships, and it offers fast parallel processing that generalizes to any number of joints at any resolution. The proposed research combines principles of neural organization with the constraints of engineering reality.
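As a rough illustration of the calibration idea, and not the proposal's actual network architecture, the sketch below shows how a controller might learn a visuomotor map "by its own experience": it issues random joint commands to a hypothetical two-link planar arm, observes where the end effector lands visually, and trains an inverse map (visual target to joint angles) from those observations. The link lengths, the linear form of the map, and the learning rate are all illustrative assumptions; the proposal envisions a parallel neural-like network rather than this linear stand-in.

    import numpy as np

    rng = np.random.default_rng(0)
    L1, L2 = 0.30, 0.25          # illustrative link lengths (m)

    def forward_kinematics(theta):
        """'Visual' feedback: planar end-effector position for joint angles theta."""
        x = L1 * np.cos(theta[0]) + L2 * np.cos(theta[0] + theta[1])
        y = L1 * np.sin(theta[0]) + L2 * np.sin(theta[0] + theta[1])
        return np.array([x, y])

    # Loosely defined initial visuomotor map: small random linear weights
    # mapping an augmented visual position [x, y, 1] to joint angles.
    W = 0.01 * rng.standard_normal((2, 3))
    lr = 0.05                    # illustrative learning rate

    def inverse_map(target):
        """Current estimate of the joint angles that reach a visual target."""
        return W @ np.append(target, 1.0)

    # "Motor babbling": issue random joint commands, observe the visual
    # consequence, and train the inverse map on (observed position -> command)
    # with simple delta-rule error correction.
    for _ in range(20000):
        theta = rng.uniform([0.2, 0.2], [1.4, 2.0])   # random motor command
        seen = forward_kinematics(theta)              # visual consequence
        err = theta - inverse_map(seen)
        W += lr * np.outer(err, np.append(seen, 1.0))

    # After learning, drive the arm toward a visual target with the learned map.
    target = forward_kinematics(np.array([0.8, 1.0]))  # a reachable target
    reached = forward_kinematics(inverse_map(target))
    print("target:", target, "reached:", reached)

Because the learned map here is linear while the arm's kinematics are not, the reaching accuracy is only approximate; the point of the sketch is the self-calibration loop, in which precise sensory-motor relationships emerge from loosely defined initial ones.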

Project Start:
Project End:
Budget Start: 1988-01-15
Budget End: 1988-05-01
Support Year:
Fiscal Year: 1987
Total Cost: $69,020
Indirect Cost:
Name: Wellesley College
Department:
Type:
DUNS #:
City: Wellesley
State: MA
Country: United States
Zip Code: 02481