Abstract

Dynamic programming is a fundamental approach to the optimal control of dynamical systems. In this approach, one solves the so-called Hamilton-Jacobi-Bellman equation for the value function and uses the verification theorem to construct an admissible control and check its optimality. On the other hand, in almost all applications it is rarely the case that one must find an exactly optimal control; rather, near-optimal controls are usually sufficient. In addition, there are many advantages to relaxing the requirement of optimality to one of near-optimality. For instance, since there are many more near-optimal controls than optimal controls to choose from, it may be possible to choose a near-optimal control with a simpler structure than the optimal one. This raises two important questions: How does one construct a near-optimal control, and how does one verify that a given control is near-optimal? In this paper, we study near-optimal control of systems governed by stochastic differential equations, in the framework of viscosity solutions. In this framework, we are able to dispense with the assumption, necessary in classical dynamic programming, that the value function is sufficiently smooth. Our main result is a stochastic verification theorem for near-optimality, which can be used both to construct and to verify near-optimal controls.
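For concreteness, the following is a minimal sketch of the standard stochastic control setting to which the abstract refers; the specific coefficients b, sigma, f, h, the control set U, and the epsilon-optimality notation below are illustrative assumptions and are not taken from the paper itself.

% Controlled stochastic differential equation (assumed standard form):
%   dX_t = b(t, X_t, u_t)\,dt + \sigma(t, X_t, u_t)\,dW_t, \qquad X_s = x
% Cost functional and value function:
%   J(s, x; u) = \mathbb{E}\Big[\int_s^T f(t, X_t, u_t)\,dt + h(X_T)\Big], \qquad
%   V(s, x) = \inf_{u \in \mathcal{U}} J(s, x; u)
% Hamilton-Jacobi-Bellman equation, interpreted here in the viscosity sense:
\begin{aligned}
  -\partial_t V(t,x)
  - \inf_{u \in U}\Big\{ \tfrac{1}{2}\operatorname{tr}\big(\sigma\sigma^{\top}(t,x,u)\,D_x^2 V(t,x)\big)
  + b(t,x,u)\cdot D_x V(t,x) + f(t,x,u) \Big\} &= 0, \\
  V(T,x) &= h(x).
\end{aligned}
% Near-optimality: for a tolerance \varepsilon > 0, an admissible control u^{\varepsilon}
% is called \varepsilon-optimal (near-optimal) if
%   J(s, x; u^{\varepsilon}) \le V(s, x) + \varepsilon.
% A verification theorem for near-optimality gives conditions under which such a
% u^{\varepsilon} can be constructed or checked without requiring V to be smooth.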
