Optimal control is fundamental to all endeavors that apply dynamics and control: if a system can be forced to do something, it can be forced to do so in an optimal way. Historically, there has been no single, systematic procedure for deriving non-linear optimal feedback control laws that applies to a given optimal control problem across a variety of boundary conditions. As different boundary or terminal conditions are imposed on the system, the nature of the optimal feedback control law can change drastically and with no apparent pattern. This is a fundamental difficulty, and it implies that the optimal control law for a given dynamical system must be "re-solved" as the boundary conditions and targets for the system change. The research we propose directly addresses this limitation.
Starting from the same basic foundations from which the Hamilton-Jacobi-Bellman equation is derived, we have developed a new approach to solving optimal feedback control problems that overcomes some of these barriers to truly reconfigurable control. Using the classical theory of canonical transformations (corresponding to the solution flow of the Hamiltonian system derived from the necessary conditions for optimality), we are able to pose a formal solution to the optimal control problem with arbitrary boundary conditions placed on the dynamical system. These formal results have proven fruitful: we have been able to develop an explicit solution procedure that finds an analytical form for the non-linear optimal feedback control law for a general class of problems. Furthermore, our approach provides an explicit algorithm for reconfiguring optimal feedback controls to accommodate changes in boundary conditions and terminal constraints, so long as the cost function and dynamics (i.e., the Hamiltonian function) remain the same.
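The construction referred to above can be summarized in standard optimal-control notation. The sketch below is a generic statement of the necessary conditions and the canonical-transformation viewpoint, not the proposal's full development; all symbols are the conventional ones, not taken verbatim from our results.

```latex
% For cost J = \phi(x(t_f)) + \int_{t_0}^{t_f} L(x,u,t)\,dt subject to
% dynamics \dot{x} = f(x,u,t), define the Hamiltonian
H(x,\lambda,u,t) = L(x,u,t) + \lambda^{\mathsf{T}} f(x,u,t).
% Minimizing H over u yields u^{*}(x,\lambda,t), and the necessary
% conditions for optimality give the Hamiltonian system
\dot{x} = \frac{\partial H}{\partial \lambda},
\qquad
\dot{\lambda} = -\frac{\partial H}{\partial x}.
% A generating function F_1(x, x_0, t) for the canonical transformation
% carrying (x,\lambda) at time t to (x_0,\lambda_0) at t_0 satisfies the
% Hamilton-Jacobi equation
\frac{\partial F_1}{\partial t}
  + H\!\left(x, \frac{\partial F_1}{\partial x}, t\right) = 0,
\qquad
\lambda = \frac{\partial F_1}{\partial x},
\quad
\lambda_0 = -\frac{\partial F_1}{\partial x_0}.
```

Because the generating function encodes the full solution flow, different boundary or terminal conditions can be imposed through its partial derivatives algebraically, rather than by re-solving the underlying dynamics.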
We will continue to develop our approach and explore its application to larger classes of control problems, including those with control constraints, state constraints, under-actuated dynamics, and non-analytic cost functions. The main outcomes of this research will be a new theoretical formalism for solving and analyzing optimal control problems and a computational tool that generates optimal feedback control laws for a general class of systems. Both the formalism and the tool will be of use in educational and research settings, and will be made available to students taking graduate courses in optimal control at Michigan.
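The simplest member of the class of problems for which optimal feedback laws can be computed in closed form is the finite-horizon linear-quadratic regulator. The sketch below is an illustrative instance of such a computational tool, not the proposal's algorithm: it uses the standard backward Riccati recursion, and the dynamics, horizon, and weighting matrices are hypothetical choices made only for the example.

```python
import numpy as np

# Hypothetical double-integrator dynamics discretized at dt = 0.1;
# Q, R, and the horizon N are illustrative weights, not from the proposal.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)                 # state cost weight
R = np.array([[0.1]])         # control cost weight
N = 50                        # horizon length

# Backward Riccati recursion: produces time-varying feedback gains
# u_k = -K_k x_k minimizing sum_k (x'Qx + u'Ru) + x_N' Q x_N.
P = Q.copy()
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()               # gains[k] is the gain applied at step k

# Closed-loop rollout from an off-origin initial state; the feedback
# law drives the state toward the origin over the horizon.
x = np.array([[1.0], [0.0]])
for K in gains:
    x = (A - B @ K) @ x
print(float(np.linalg.norm(x)))
```

The key property this example shares with the proposed approach is that the feedback law is a function of the current state, so it remains valid for any initial condition; the proposal's contribution is extending such closed-form feedback solutions, and their reconfiguration under changed boundary conditions, to non-linear systems.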