Existing maximum principles of Pontryagin's type and related optimality conditions, such as those derived by F. Clarke, by B. Kaskosz and S. Łojasiewicz Jr., and by H. J. Sussmann, can be strengthened to global necessary optimality conditions in the form of the so-called feedback minimum principle. This is possible for both classical and non-smooth optimal control problems without terminal constraints. The formulation of the feedback minimum principle (or of related extremality conditions) stays within the basic constructions of the maximum principles mentioned above: the Hamiltonian (Pontryagin) function, the adjoint differential equation or inclusion, and its solutions, the co-trajectories. At the same time, the maximum condition itself, i.e., maximization of the Hamiltonian, takes a variational form: any optimal trajectory of the problem under consideration should also be optimal in a specific “accessory” problem of dynamic optimization. The latter is stated over all tubes of Krasovskii-Subbotin constructive motions generated by feedback strategies that are extremal with respect to a certain supersolution of the Hamilton-Jacobi equation. Such a supersolution can be represented explicitly in terms of the co-trajectory of a reference control process and the terminal cost function. In its general version, the feedback minimum principle operates with generalized solutions of the proximal Hamilton-Jacobi inequality for weakly decreasing (u-stable) functions.
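For orientation only, a minimal LaTeX sketch of the objects named above is given here, assuming a Mayer-type problem: minimize a terminal cost \ell(x(T)) over trajectories of \dot x = f(t,x,u), u(t) \in U. The notation H, \psi, \varphi, the sign conventions, and in particular the explicit form chosen for the candidate supersolution \varphi are illustrative assumptions made for this sketch, not formulas quoted from the paper.

% Illustrative sketch only; notation and sign conventions are assumptions.
% Pontryagin (Hamiltonian) function and adjoint system along a reference
% process (\bar x, \bar u) with co-trajectory \psi:
H(t,x,\psi,u) = \langle \psi, f(t,x,u) \rangle, \qquad
\dot\psi(t) = -\,\partial_x H\bigl(t,\bar x(t),\psi(t),\bar u(t)\bigr), \qquad
\psi(T) = -\,\nabla \ell\bigl(\bar x(T)\bigr).

% Proximal Hamilton--Jacobi inequality characterizing weakly decreasing
% (u-stable) functions \varphi, stated for proximal subgradients
% (\theta,\zeta) \in \partial_P \varphi(t,x):
\theta + \min_{u \in U} \bigl\langle \zeta, f(t,x,u) \bigr\rangle \le 0.

% One possible explicit candidate supersolution built from the reference
% co-trajectory and the terminal cost (purely illustrative form):
\varphi(t,x) = \ell(x) + \bigl\langle \psi(t),\, \bar x(t) - x \bigr\rangle.

Roughly, and still under the assumptions of this sketch, feedback strategies extremal with respect to such a \varphi select, at each (t,x), controls minimizing \langle \nabla_x \varphi(t,x), f(t,x,u) \rangle over u \in U; the tubes of Krasovskii-Subbotin constructive motions they generate are the objects over which the accessory problem is posed.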