In this paper we study the existence of optimal trajectories associated with a generalized solution to the Hamilton-Jacobi-Bellman equation arising in optimal control. In general, we cannot expect such solutions to be differentiable. But, in a way analogous to the use of distributions in PDE, we replace the usual derivatives with "contingent epiderivatives" and the Hamilton-Jacobi equation with two "contingent Hamilton-Jacobi inequalities." We show that the value function of an optimal control problem satisfies these contingent inequalities. Our approach yields the following three results: (a) upper semicontinuous solutions to the contingent inequalities are monotone along the trajectories of the dynamical system; (b) with every continuous solution V of the contingent inequalities, we can associate an optimal trajectory along which V is constant; (c) for such solutions, we can construct optimal trajectories through the corresponding optimal feedback. Such solutions are also "viscosity solutions" of a Hamilton-Jacobi equation. Finally, we prove a relationship between superdifferentials of solutions introduced by Crandall et al. [10] and the Pontryagin principle, and discuss the link between viscosity solutions and Clarke's approach to the Hamilton-Jacobi equation.
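For orientation, the contingent epiderivative may be understood in the usual sense of set-valued analysis (a sketch of the standard definition, which may differ in detail from the paper's precise setting): for a function V and a direction v,

    D_{\uparrow} V(x)(v) \;=\; \liminf_{h \to 0^{+},\; w \to v} \frac{V(x + h w) - V(x)}{h}.

When V is differentiable, this reduces to the directional derivative \langle \nabla V(x), v \rangle; loosely speaking, the two contingent inequalities then recover the two one-sided halves of the Hamilton-Jacobi equality that the paper replaces.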