Abstract

A simple, system-theoretic proof is given that the value function of deterministic optimal control is the unique generalized solution to the Hamilton-Jacobi equation. Previous proofs have for the most part relied on abstract analytical techniques, which establish uniqueness of viscosity solutions for general classes of partial differential equations. However, as we show, the machinery of viscosity solutions is not needed to establish uniqueness of solutions for the specific Hamilton-Jacobi equation of deterministic optimal control; instead, a proof may be given based on an analysis of state trajectories and their approximations, and on familiar dynamic programming ideas. We describe difficulties which have previously arisen in following the system-theoretic approach, and show how to overcome them.
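For orientation, a standard formulation of the Hamilton-Jacobi(-Bellman) equation of deterministic optimal control reads as follows; the notation ($f$, $\ell$, $g$, $U$, $V$) is a common choice and not necessarily that of the paper.

\[
\begin{aligned}
&\text{Value function:} && V(t,x) \;=\; \inf_{u(\cdot)} \left\{ \int_t^T \ell\big(x(s),u(s)\big)\,ds + g\big(x(T)\big) \right\},
\quad \dot{x}(s) = f\big(x(s),u(s)\big),\ x(t)=x,\\[4pt]
&\text{HJ equation:} && \partial_t V(t,x) \;+\; \min_{u \in U}\Big\{ \big\langle \nabla_x V(t,x),\, f(x,u) \big\rangle + \ell(x,u) \Big\} \;=\; 0,
\qquad V(T,x) = g(x).
\end{aligned}
\]

The paper's claim, in this notation, is that $V$ is the unique generalized solution of this terminal-value problem, with uniqueness obtained by trajectory-based dynamic programming arguments rather than by the general viscosity-solution comparison theory.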
