Abstract

Dynamic programming is a fundamental approach to the optimal control of dynamical systems. In this approach, one solves the Hamilton-Jacobi-Bellman equation for the value function and uses a verification theorem to construct an admissible control and check its optimality. On the other hand, in almost all applications it is unnecessary to find an exactly optimal control; near-optimal controls are usually sufficient. Moreover, relaxing the requirement of optimality to one of near-optimality has several advantages. For instance, since there are many more near-optimal controls than optimal ones to choose from, it may be possible to select a near-optimal control with a simpler structure than the optimal one. This raises two important questions: How does one construct a near-optimal control, and how does one verify that a given control is near-optimal? In this paper, we study near-optimal control of systems governed by stochastic differential equations, in the framework of viscosity solutions. In this framework we can dispense with the assumption, necessary in classical dynamic programming, that the value function is sufficiently smooth. Our main result is a stochastic verification theorem for near-optimality, which can be used both to construct and to verify near-optimal controls.
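For context, a standard finite-horizon form of the Hamilton-Jacobi-Bellman equation referred to above is sketched here; the symbols b, \sigma, f, g, and U (drift, diffusion, running cost, terminal cost, control set) are generic placeholders and not necessarily the paper's notation. For a controlled diffusion dX_s = b(s, X_s, u_s)\,ds + \sigma(s, X_s, u_s)\,dW_s, the value function V formally satisfies

\[
\begin{cases}
-\partial_t V(t,x) - \inf_{u \in U} \Big\{ b(t,x,u) \cdot \nabla_x V(t,x) + \tfrac{1}{2}\,\mathrm{tr}\!\big(\sigma\sigma^{\top}(t,x,u)\, \nabla_x^2 V(t,x)\big) + f(t,x,u) \Big\} = 0, & (t,x) \in [0,T) \times \mathbb{R}^n, \\
V(T,x) = g(x), & x \in \mathbb{R}^n.
\end{cases}
\]

In the viscosity-solution framework the abstract refers to, V need only be continuous: it satisfies this equation in the viscosity sense, via smooth test functions, rather than pointwise through classical derivatives.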
