By adopting a distributional viewpoint on law-invariant convex risk measures, we construct dynamic risk measures (DRMs) at the distributional level. We then apply these DRMs to investigate Markov decision processes with latent costs, random actions, and weakly continuous transition kernels. Furthermore, the proposed DRMs allow risk aversion to change dynamically over time. Under mild assumptions, we derive a dynamic programming principle and show the existence of an optimal policy over both finite and infinite time horizons. Moreover, we provide a sufficient condition for the optimality of deterministic actions. For illustration, we conclude the paper with examples from optimal liquidation with limit order books and from autonomous driving.

Funding: This work was supported by the Natural Sciences and Engineering Research Council of Canada [Grants RGPAS-2018-522715 and RGPIN-2018-05705].
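The abstract does not spell out the form of the dynamic programming principle; as a purely illustrative sketch (the notation $V_t$, $A(x)$, $c$, $P$, and $\rho_t$ below is assumed for exposition and is not taken from the paper), a risk-averse Bellman recursion built from one-step law-invariant convex risk measures typically reads
\[
V_t(x) \;=\; \inf_{a \in A(x)} \rho_t\!\left( c(x,a) + V_{t+1}(X_{t+1}) \right),
\qquad X_{t+1} \sim P(\cdot \mid x, a), \qquad V_T(x) \equiv 0,
\]
where each $\rho_t$ is a one-step law-invariant convex risk measure that may vary with $t$, capturing dynamically changing risk aversion.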