Abstract

The martingale treatment of stochastic control problems is based on the idea that the correct formulation of Bellman's principle of optimality for stochastic minimization problems is in terms of a submartingale inequality: the value process of dynamic programming is always a submartingale, and is a martingale under a particular control strategy if and only if that strategy is optimal. Local conditions for optimality, in the form of a minimum principle, can be obtained by applying Meyer's submartingale decomposition along with martingale representation theorems; conditions for the existence of an optimal strategy can also be stated. This paper gives an introduction to these methods and a survey of the results obtained so far, as well as an indication of some shortcomings in the theory and open problems. The martingale approach to some related problems - optimal stopping, impulse control and stochastic differential games - is also outlined.
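The submartingale characterization mentioned above can be sketched as follows; the notation here is illustrative and not taken from the paper. Suppose the aim is to minimize an expected total cost with running cost $c$ and terminal cost $\Phi$ over admissible controls $u$, and define the value process along $u$ by conditioning on the information $\mathcal{F}_t$ available at time $t$:

```latex
% Cost to be minimized over admissible controls u (illustrative notation):
J(u) \;=\; \mathbb{E}\!\left[\int_0^T c(s, x_s, u_s)\,ds + \Phi(x_T)\right].

% Value process along a control u: cost incurred so far, plus the best
% achievable conditional expected cost-to-go among controls agreeing
% with u up to time t.
W_t(u) \;=\; \int_0^t c(s, x_s, u_s)\,ds
\;+\; \operatorname*{ess\,inf}_{v = u \text{ on } [0,t]}
\mathbb{E}\!\left[\int_t^T c(s, x_s, v_s)\,ds + \Phi(x_T)
\,\middle|\, \mathcal{F}_t\right].

% Martingale optimality principle (Bellman's principle in this form):
% (i)  W_t(u) is a submartingale for every admissible control u;
% (ii) u^* is optimal if and only if W_t(u^*) is a martingale.
```

In this form, (i) expresses that no control can improve on the dynamic-programming bound on average, while the martingale property in (ii) says that an optimal control loses nothing as time evolves; applying Meyer's decomposition to the submartingale in (i) is what yields local, pointwise optimality conditions.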

