9309960 Smith

This is the first-year funding of a three-year continuing award. This research addresses the task of finding the optimal control policy for complex, nonlinear, and uncertain systems, given a set of control objectives. Most effort to date in intelligent control systems has focused on implementation or on function approximation for such systems; it is assumed a priori that the given training information best satisfies the control objectives, without a direct attempt to find the control policy that is optimal with respect to those objectives. Conventional optimal control methods are either limited to systems amenable to mathematical analysis or require enormous amounts of computation in the form of dynamic programming. Intelligent control approaches that do address the task of determining the optimal control policy use the same basic strategy as dynamic programming, only in less structured form; these include Back-propagation Through Time (BTT) methods and systems that employ reinforcement learning or adaptive critics.

This research project uses a computationally efficient means of dynamic programming, based on a cell state space approach, that makes it possible to run multiple iterations of the algorithm on realistic time scales. These methods will be applied to the six-degree-of-freedom flight control of Autonomous Underwater Vehicles (AUVs) under development at Florida Atlantic University (FAU). One application of these AUVs is a joint effort between the University of South Florida's Marine Science Department and FAU to perform a set of long-range oceanographic surveys in the Gulf of Mexico to determine bottom composition and water quality. Because underwater vehicles can move freely in three dimensions in relatively uncluttered environments, the immediate potential for valuable contributions by autonomous robotic systems may be even greater in the ocean than on land.
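The cell state space idea mentioned above can be sketched in a few lines: the continuous state space is partitioned into a finite set of cells, and dynamic programming (here, value iteration) is run over the cell centers, so each sweep costs only one table update per cell and action. The one-dimensional dynamics, quadratic cost, grid size, and action set below are hypothetical illustrations chosen for brevity, not the project's actual six-degree-of-freedom vehicle model.

```python
import numpy as np

# Hypothetical setup: state x in [-1, 1] discretized into N_CELLS cells,
# dynamics x' = x + 0.1*u, stage cost x^2 + 0.01*u^2, discount gamma.
N_CELLS = 21
xs = np.linspace(-1.0, 1.0, N_CELLS)        # cell centers
actions = np.array([-1.0, 0.0, 1.0])        # small discrete control set
gamma = 0.95

def cell_index(x):
    """Map a continuous state to the index of its nearest cell (clipped to the grid)."""
    return int(np.clip(np.round((x + 1.0) / 2.0 * (N_CELLS - 1)), 0, N_CELLS - 1))

def value_iteration(n_iters=200):
    """Sweep the cell table repeatedly, keeping the best action per cell."""
    V = np.zeros(N_CELLS)       # cost-to-go estimate per cell
    policy = np.zeros(N_CELLS)  # best action found per cell
    for _ in range(n_iters):
        for i, x in enumerate(xs):
            # Q-value of each action: stage cost plus discounted cost-to-go
            # of the cell the dynamics land in.
            q = [x**2 + 0.01 * u**2 + gamma * V[cell_index(x + 0.1 * u)]
                 for u in actions]
            V[i] = min(q)
            policy[i] = actions[int(np.argmin(q))]
    return V, policy

V, policy = value_iteration()
```

As expected for a regulator cost centered at the origin, the resulting policy pushes the state toward zero from either side and holds it there; finer grids trade accuracy against the per-sweep cost that the cell approach keeps bounded.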