The aim of this proposal is to make an essential contribution to the mathematical theory of three classes of dynamic optimization problems by producing new optimality criteria and stability analysis results. The three classes are the generalized problem of Bolza, continuous-time optimal control, and discrete-time optimal control. The first class, the generalized problem of Bolza, lies at the heart of nonsmooth analysis: the regularity of the integrand is lacking but is compensated for by the relative regularity of its Hamiltonian. Although under certain hypotheses the second class can be viewed as a subclass of the first, it is beneficial to study it directly in order to obtain results that exploit the special and rich structure of the problem. This line of research produces, for optimal control problems, results that are in general distinct from those obtained by specializing the results for the generalized problem of Bolza; moreover, the two sets of results hold under different sets of hypotheses. The third class of problems is important for solving the continuous-time optimal control problem numerically via discretization. For all three classes, second-order necessary and sufficient conditions will be derived in terms of a quadratic functional, conjugate point theory, and a Riccati equation. In addition, the stability and sensitivity of solutions under perturbations will be investigated. Hamilton-Jacobi theory will be employed, along with the latest techniques from variational and nonsmooth analysis.
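For orientation, the first two problem classes admit the following standard formulations; the notation is the conventional one from the variational-analysis literature and is not taken from this abstract. The generalized problem of Bolza is

\[
\text{minimize } J[x] := \ell\bigl(x(a), x(b)\bigr) + \int_a^b L\bigl(t, x(t), \dot{x}(t)\bigr)\,dt
\]

over arcs $x : [a,b] \to \mathbb{R}^n$, where $\ell$ and $L$ may take the value $+\infty$, so that endpoint and dynamic constraints are absorbed into the objective. The continuous-time optimal control problem is

\[
\text{minimize } \ell\bigl(x(a), x(b)\bigr) + \int_a^b L\bigl(t, x(t), u(t)\bigr)\,dt
\quad \text{subject to } \dot{x}(t) = f\bigl(t, x(t), u(t)\bigr),\ u(t) \in U(t),
\]

with associated Hamiltonian $H(t, x, p) = \sup_{u \in U(t)} \bigl\{ \langle p, f(t, x, u)\rangle - L(t, x, u) \bigr\}$, whose relative regularity compensates for the lack of regularity of the integrand.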

Optimization problems with dynamical structure have become prominent in a variety of disciplines as mathematical models of systems that evolve in time. In particular, optimal control theory was developed in the 1950's mainly to deal with applications arising from several disciplines, including engineering, operations management, and economics. Most problem formulations were initially derived from engineering considerations, which led to the presence of control and state constraints, as in the "soft moon landing problem" and the "rocket car problem". More recently, the formulation of the optimal control problem has been adapted to include considerations from other areas such as management and economics. This accounts for the increased interest in optimal control problems with constraints involving both the state and the control variable. Providing new tools and better techniques to tackle these problems and to help compute their solutions would therefore have an important impact on the applications emanating from these disciplines. This is exactly the goal of the present proposal: to derive optimality criteria for control problems with different types of constraints and to analyze the stability of these criteria under small perturbations. The latter is crucial for applications because of errors in measurements.

Agency: National Science Foundation (NSF)
Institute: Division of Mathematical Sciences (DMS)
Type: Standard Grant (Standard)
Application #: 0072598
Program Officer: Deborah Lockhart
Project Start:
Project End:
Budget Start: 2000-08-01
Budget End: 2003-07-31
Support Year:
Fiscal Year: 2000
Total Cost: $90,000
Indirect Cost:
Name: Michigan State University
Department:
Type:
DUNS #:
City: East Lansing
State: MI
Country: United States
Zip Code: 48824