This paper addresses some open issues in the optimization of generic certainty equivalents. Such equivalents are modelled as increasing functionals of the discounted sums of per-stage cost or reward functions, unbounded above, defined on the paths of the underlying controlled Markov chain on a general state space that models the random dynamics of the system. Examples of such functionals include logarithmic and power utilities, as well as robust risk-sensitive preferences, among others. The critical results obtained are solutions of this problem for minimization of a generic per-stage cost unbounded above, and for maximization of a per-stage reward, both satisfying a w-growth (hence unbounded) condition, in the finite-horizon setup. In the process, we establish certain nontrivial closure properties of the dynamic programming operators. In addition, we provide a real-life example from portfolio consumption.
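For orientation, a certainty-equivalent criterion of the kind described above can be sketched as follows; the notation ($U$, $\beta$, $c$, $X_t$, $A_t$, $N$) is illustrative and not taken from the paper:
\[
J_N^U(x) \;=\; U^{-1}\!\left(\mathbb{E}_x\!\left[\,U\!\Big(\textstyle\sum_{t=0}^{N-1} \beta^{t}\, c(X_t, A_t)\Big)\right]\right),
\]
where $U$ is an increasing utility applied to the discounted sum of per-stage costs along the controlled Markov chain; taking $U(y)=e^{\gamma y}$ recovers the exponential (risk-sensitive) case, while logarithmic and power choices of $U$ give the other preferences mentioned above.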