Abstract

We develop a theory for continuous-time, non-Markovian stochastic control problems that are inherently time-inconsistent. Their distinguishing feature is that the classical Bellman optimality principle no longer holds. Our formulation is cast within the framework of a controlled non-Markovian forward stochastic differential equation and a general objective functional. We adopt a game-theoretic approach to study such problems, meaning that we seek subgame-perfect Nash equilibrium points. As a first novelty of this work, we introduce and motivate a refinement of the definition of equilibrium that allows us to establish a direct and rigorous proof of an extended dynamic programming principle, in the same spirit as in the classical theory. This in turn allows us to introduce a system, consisting of an infinite family of backward stochastic differential equations, analogous to the classical HJB equation. We prove that this system is fundamental, in the sense that its well-posedness is both necessary and sufficient to characterise the value function and equilibria. As a final step, we provide an existence and uniqueness result. Some examples and extensions of our results are also presented.
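To fix ideas, the following display is a minimal illustrative sketch of the type of setting the abstract describes; the coefficients $(b,\sigma)$, the reward functional $J$, and the first-order equilibrium condition below are standard in the time-inconsistent control literature and are stated here as assumptions, not as the paper's exact formulation. The state equation is a controlled, path-dependent (non-Markovian) SDE

\[
X_s = x_0 + \int_0^s b_r\big(X_{\cdot\wedge r},\alpha_r\big)\,\mathrm{d}r + \int_0^s \sigma_r\big(X_{\cdot\wedge r},\alpha_r\big)\,\mathrm{d}W_r, \quad s\in[0,T],
\]

with reward, evaluated at time $t$,

\[
J(t,\alpha) := \mathbb{E}\Big[\,\xi\big(t,X_{\cdot}\big) + \int_t^T f_r\big(t,X_{\cdot\wedge r},\alpha_r\big)\,\mathrm{d}r \,\Big|\, \mathcal{F}_t\Big],
\]

where the dependence of $\xi$ and $f$ on the evaluation time $t$ is what breaks the Bellman principle. A control $\alpha^\star$ is then a (subgame-perfect) equilibrium if, for every admissible $\nu$ and every $t$,

\[
\liminf_{\varepsilon\searrow 0}\, \frac{J\big(t,\alpha^\star\big)-J\big(t,\alpha^{t,\varepsilon,\nu}\big)}{\varepsilon} \;\geq\; 0,
\]

where $\alpha^{t,\varepsilon,\nu}$ coincides with $\nu$ on $[t,t+\varepsilon)$ and with $\alpha^\star$ elsewhere. The refinement of this definition that the paper introduces is what enables the extended dynamic programming principle and the associated infinite family of BSDEs.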
