Abstract

We introduce a general framework for Markov decision problems under model uncertainty in a discrete-time, infinite-horizon setting. By establishing a dynamic programming principle, we obtain a local-to-global paradigm: solving a local, that is, a one-time-step, robust optimization problem yields an optimizer of the global (i.e., infinite-time-step) robust stochastic optimal control problem, as well as a corresponding worst-case measure. Moreover, we apply this framework to portfolio optimization involving data of the S&P 500. We present two different types of ambiguity sets: the first is fully data-driven, given by a Wasserstein ball around the empirical measure; the second is described by a parametric set of multivariate normal distributions, where the corresponding uncertainty sets of the parameters are estimated from the data. It turns out that in scenarios where the market is volatile or bearish, the optimal portfolio strategies from the corresponding robust optimization problem outperform those obtained without model uncertainty, showcasing the importance of taking model uncertainty into account.
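As a rough illustration of the local (one-time-step) robust optimization problem described above, the following Python sketch maximizes the worst-case expected exponential utility of a one-period portfolio over a small parametric ambiguity set of multivariate normal return distributions. All names and numbers here (candidate_models, worst_case_utility, the means, covariances, and risk aversion) are hypothetical and chosen only for illustration; this is a minimal sketch under these assumptions, not the paper's algorithm or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical point estimates for two risky assets over one time step.
base_mu = np.array([0.05, 0.03])
base_cov = np.array([[0.04, 0.01],
                     [0.01, 0.02]])

def candidate_models(n_models=5, mu_shift=0.02, vol_scale=0.3):
    """Build a small parametric ambiguity set: multivariate normals whose
    means are shifted and whose covariances are inflated within assumed bounds."""
    models = []
    for k in range(n_models):
        shift = mu_shift * (2 * k / (n_models - 1) - 1)   # shift means down/up
        scale = 1.0 + vol_scale * k / (n_models - 1)      # inflate covariance
        models.append((base_mu + shift, scale * base_cov))
    return models

def worst_case_utility(weights, models, n_samples=20_000, risk_aversion=2.0):
    """Inner problem: minimum over candidate measures of E[-exp(-gamma * R_w)],
    where R_w is the one-period portfolio return under the given weights."""
    values = []
    for mu, cov in models:
        returns = rng.multivariate_normal(mu, cov, size=n_samples) @ weights
        values.append(np.mean(-np.exp(-risk_aversion * returns)))
    return min(values)

# Outer problem: coarse grid search over long-only weights summing to one.
grid = np.linspace(0.0, 1.0, 21)
best_w, best_val = None, -np.inf
for w1 in grid:
    w = np.array([w1, 1.0 - w1])
    val = worst_case_utility(w, candidate_models())
    if val > best_val:
        best_w, best_val = w, val

print("robust one-step weights:", best_w, "worst-case utility:", best_val)
```

In the fully data-driven case, the inner minimization would instead range over measures in a Wasserstein ball around the empirical distribution of observed returns, which is typically handled via duality arguments rather than by enumerating candidate models as above.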
