Abstract

This paper establishes a general analytical framework for continuous-time stochastic control problems faced by an ambiguity-averse agent (AAA) with time-inconsistent preferences, where the control problems do not satisfy Bellman's principle of optimality. The AAA is concerned about model uncertainty in the sense that she is not completely confident in the reference model of the controlled Markov state process and instead considers a class of similar alternative models. The problems of interest are studied within a set of dominated models, and the AAA seeks an optimal decision that is robust with respect to model risk. We adopt a game-theoretic framework and the concept of subgame perfect Nash equilibrium to derive an extended dynamic programming equation and extended Hamilton-Jacobi-Bellman-Isaacs equations that characterize the robust dynamically optimal control of the problem. We also prove a verification theorem to provide theoretical support for the construction of the robust control. To illustrate the tractability of the proposed framework, we study an example of robust dynamic mean-variance portfolio selection in two cases: (1) constant risk aversion and (2) state-dependent risk aversion.
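
As a schematic illustration of the type of criterion described above, consider the robust mean-variance objective sketched below. The notation (wealth process $X^{u}$, risk-aversion parameter $\gamma$, dominated set of alternative models $\mathcal{Q}$, reference measure $P$, and penalty term $\mathcal{P}$) is our own illustrative choice under standard conventions for such problems, not the paper's actual formulation:

\[
  % Agent maximizes over controls u; an adversary selects the
  % worst-case model Q from a dominated set of alternatives to P.
  \sup_{u}\,\inf_{Q \in \mathcal{Q}}
  \Big\{ \mathbb{E}^{Q}_{t,x}\big[X^{u}_{T}\big]
  \;-\; \frac{\gamma}{2}\,\mathrm{Var}^{Q}_{t,x}\big[X^{u}_{T}\big]
  \;+\; \mathcal{P}(Q \,\|\, P) \Big\}.
\]

Because the variance term does not satisfy the tower property of conditional expectations, Bellman's principle fails for this criterion, which is why optimality is instead defined through a subgame perfect Nash equilibrium between the agent's current and future selves.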
