Abstract

Dynamic Programming identifies the value function of continuous-time optimal control with a solution, appropriately defined, of the Hamilton-Jacobi equation. This relationship in turn leads to sufficient conditions for global optimality, which have been widely used to confirm the optimality of putative minimizers. In continuous-time optimal control, the dynamic programming methodology has been applied to problems whose state space is a vector space. However, there are many problems of interest in which it is necessary to regard the state space as a manifold. This paper extends dynamic programming to cover problems in which the state space is a general finite-dimensional C∞ manifold. The application of these results is illustrated by the investigation of minimum-time controllers for a rigid pendulum.
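For orientation, a standard form of the Hamilton-Jacobi-Bellman equation referred to above can be sketched as follows; this statement and its notation are a generic textbook form, not drawn from the paper itself. For a control system \(\dot{x} = f(x,u)\) on a manifold \(M\) with admissible controls \(u \in U\) and running cost \(L\), the value function \(V\) is expected to satisfy, in an appropriate generalized sense,

\[
\min_{u \in U} \bigl\{ \langle dV(x),\, f(x,u) \rangle + L(x,u) \bigr\} = 0, \qquad x \in M,
\]

where \(dV(x)\) denotes the differential of \(V\) at \(x\), an element of the cotangent space \(T_x^{*}M\); on a manifold this pairing replaces the gradient-times-dynamics term of the vector-space case. For a minimum-time problem such as the pendulum example mentioned in the abstract, one would take \(L \equiv 1\).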
