Abstract

In this paper we study the existence of optimal trajectories associated with a generalized solution to the Hamilton-Jacobi-Bellman equation arising in optimal control. In general, such solutions cannot be expected to be differentiable. But, in a way analogous to the use of distributions in PDEs, we replace the usual derivatives with "contingent epiderivatives" and the Hamilton-Jacobi equation by two "contingent Hamilton-Jacobi inequalities". We show that the value function of an optimal control problem satisfies these contingent inequalities. Our approach yields the following three results: (a) upper semicontinuous solutions to the contingent inequalities are monotone along the trajectories of the dynamical system; (b) with every continuous solution V of the contingent inequalities, we can associate an optimal trajectory along which V is constant; (c) for such continuous solutions, we can construct optimal trajectories through the corresponding optimal feedback. Such solutions are also "viscosity solutions" of a Hamilton-Jacobi equation. Finally, we discuss how viscosity solutions relate to Clarke's approach to the Hamilton-Jacobi equation.
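To fix notation, the following is a brief sketch of the standard definition of the contingent epiderivative and of the general shape such contingent inequalities take for a control system x'(t) = f(t, x(t), u(t)), u(t) in U. The dynamics f, the control set U, and the sign conventions below are assumptions made for illustration only; the paper itself should be consulted for the precise statement.

% Contingent epiderivative of V at x in the direction v:
% the liminf of difference quotients over perturbed directions.
\[
  D_\uparrow V(x)(v) \;=\; \liminf_{\substack{h \to 0^+ \\ v' \to v}}
    \frac{V(x + h\,v') - V(x)}{h}.
\]
% The contingent hypoderivative D_\downarrow V is defined analogously,
% with limsup in place of liminf. Schematically (and only schematically:
% conventions vary), the two contingent Hamilton-Jacobi inequalities
% constrain V along the admissible velocities (1, f(t,x,u)) of the
% time-extended system:
\[
  \inf_{u \in U} D_\downarrow V(t,x)\bigl(1, f(t,x,u)\bigr) \;\le\; 0
  \;\le\;
  \inf_{u \in U} D_\uparrow V(t,x)\bigl(1, f(t,x,u)\bigr).
\]
% No classical derivative of V is required. Heuristically, the left
% inequality provides, at each point, a control direction along which V
% does not increase, while the right one makes V nondecreasing along
% every trajectory; together they force V to be constant along some
% trajectory, which is the mechanism behind results (a) and (b).

Note that since the liminf never exceeds the limsup, at a control u realizing the left inequality both generalized derivatives must vanish, which is exactly the direction the optimal feedback of result (c) selects in this sketch.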
