Optimized certainty equivalents (OCEs) are a family of risk measures widely used by both practitioners and academics, mostly owing to their tractability and the fact that they encompass important examples, including entropic risk measures and average value-at-risk. In this work we consider stochastic optimal control problems where the objective criterion is given by an OCE risk measure or, in other words, a risk minimization problem for controlled diffusions. A major difficulty arises because OCEs are often time-inconsistent. Nevertheless, via an enlargement of the state space we obtain a substitute of sorts for time-consistency in fair generality. This allows us to derive a dynamic programming principle and thus recover central results of (risk-neutral) stochastic control theory. In particular, we show that the value of our risk minimization problem can be characterized as a viscosity solution of a Hamilton--Jacobi--Bellman--Isaacs equation. Under suitable technical conditions, we further establish a comparison principle and hence uniqueness for the latter.
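For orientation, one standard convention writes the OCE risk measure associated with a convex loss function $\ell$ as follows (a sketch only; the paper's exact sign and normalization conventions may differ):
\[
\rho_\ell(X) \;=\; \inf_{\eta \in \mathbb{R}} \bigl\{ \eta + \mathbb{E}\bigl[\ell(-X - \eta)\bigr] \bigr\}.
\]
In this convention, the choice $\ell(x) = (e^{\gamma x} - 1)/\gamma$ recovers the entropic risk measure $\rho(X) = \tfrac{1}{\gamma}\log \mathbb{E}[e^{-\gamma X}]$, while $\ell(x) = x^+/\alpha$ recovers the average value-at-risk at level $\alpha$ via the Rockafellar--Uryasev representation.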