The past few decades have witnessed tremendous growth in the deployment of robotic systems to improve production rates and product quality. The vast majority of these industrial robots perform simple, repetitive tasks, yet traditional robot controllers do not exploit this repetitive nature: the same performance errors persist across every execution. Consequently, over eighty percent of today's robots are restricted to tasks requiring relatively low precision. It is predicted, however, that future growth will be in high-performance robotic systems for tasks such as electronic assembly. Learning control, which learns from past experience to eliminate errors in future trials, has great potential to emerge as the controller of choice for such systems. The bottleneck hindering this technology is its inability to handle the non-minimum phase dynamics that arise in high-speed operation and in robotic systems with flexible components.

The main objective of this project is to develop a new control methodology for the precise, high-speed execution of repetitive tasks by robotic systems with lightweight flexible components. The technical approach is inversion-based adaptive learning control, which learns both the optimal control input and the system model from past execution data. This approach resolves the bottleneck identified above by incorporating into the learning algorithms techniques from stable inversion, a theory recently developed for handling non-minimum phase systems. The project will also speed the application of the new technology to industrial robotic systems by validating its effectiveness through simulation and experimental testing and by resolving the technical issues that arise in practical implementation.

Building on the PI's eight years of experience in robotic control and his expertise in stable inversion, the three-year project will deliver the following: 1) a family of adaptive learning control algorithms designed to work for both minimum and non-minimum phase systems; 2) a set of nonlinear tools that enable the design and implementation of adaptive learning controllers for the nonlinear systems that arise in lightweight robotic systems; 3) an experimental testbed with lightweight flexible components (a 3D crane reconfigurable for various testing purposes); and 4) extensive simulation and experimental testing that validates the new control technology and demonstrates its high performance in practical implementation.

In addition to expanding the knowledge base in systems and control, the successful delivery of this control technology will have a significant impact on industries that rely on robotic systems. Deployment of the new technology will improve product quality, since it empowers the robot to learn and eliminate repetitive errors, and will raise production rates, since the new controller can overcome the difficulties caused by structural flexibility in high-speed operation. Various cranes, currently in widespread use and manually controlled by skilled operators, can be automated with the new controller to achieve high performance at high speed, effectively converting them into programmable robots. The technology also promotes the design and deployment of lighter-weight robotic systems, which in turn leads to faster speeds, lower energy consumption, less material waste, and other environmental and societal benefits.
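To make the learning-from-repetition idea concrete, the sketch below shows the simplest form of iterative learning control (a P-type trial-to-trial update), in which the feedforward input for the next execution is corrected using the previous execution's tracking error. This is only an illustration under assumed parameters, not the project's inversion-based adaptive algorithm: the first-order plant, the learning gain gamma, and all other values are hypothetical, and the toy plant is deliberately minimum phase, which is exactly the easy case; the non-minimum phase dynamics described above are what the stable-inversion-based approach is intended to handle.

import numpy as np

# Minimal P-type iterative learning control (ILC) sketch on a toy
# discrete-time SISO plant x[t+1] = a*x[t] + b*u[t], y[t] = x[t].
# The same reference trajectory is tracked over repeated trials, and the
# input is corrected from trial to trial using the previous trial's error.
# All parameters are illustrative, not taken from the project.

a, b = 0.9, 0.5           # illustrative plant parameters
T = 50                    # samples per trial
trials = 20               # number of repetitive executions
gamma = 0.8               # learning gain (chosen here so the update converges)

y_ref = np.sin(np.linspace(0, np.pi, T))   # repeated reference trajectory

def run_trial(u):
    """Simulate one execution of the task with input sequence u."""
    x = 0.0
    y = np.zeros(T)
    for t in range(T):
        y[t] = x
        x = a * x + b * u[t]
    return y

u = np.zeros(T)                     # first trial: no feedforward knowledge
for k in range(trials):
    y = run_trial(u)
    e = y_ref - y                   # tracking error of this trial
    # P-type ILC update: shift the error one step ahead because u[t]
    # first affects the output at t+1 (relative degree one).
    u[:-1] = u[:-1] + gamma * e[1:]
    print(f"trial {k:2d}  max |error| = {np.max(np.abs(e)):.4f}")

For this first-order example the trial-to-trial error contracts whenever |1 - gamma*b| < 1, so the maximum error printed each trial shrinks geometrically; for non-minimum phase plants a naive update of this kind fails, which is the motivation for the stable-inversion techniques proposed in the project.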

Agency: National Science Foundation (NSF)
Institute: Division of Electrical, Communications and Cyber Systems (ECCS)
Application #: 9810398
Program Officer: Radhakisan S. Baheti
Budget Start: 1998-09-01
Budget End: 2002-08-31
Fiscal Year: 1998
Total Cost: $150,000
Name: Iowa State University
City: Ames
State: IA
Country: United States
Zip Code: 50011