Abstract

This paper presents a fresh perspective on the Markov reward process. In order to bring Howard's model [Howard, R. A. 1969. Dynamic Programming and Markov Processes. The M.I.T. Press, 5th printing.] closer to practical applicability, two important aspects of the model are restated: (a) we make the rewards random variables instead of known constants, and (b) we allow for any decision rule over the moment set of the portfolio distribution, rather than assuming maximization of the expected value of the portfolio outcome. These modifications provide a natural setting in which the rewards are normally distributed, and thus make it possible to apply mean-variance models. An algorithm for the solution is presented, and a special case, the mean-variability decision rule of maximizing μ/σ, is worked out in detail.
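As a rough illustration of the kind of computation the abstract describes, the following Python sketch evaluates the mean and variance of the discounted return of each stationary policy of a small Markov reward process with normally distributed one-step rewards, then ranks policies by the mean-variability criterion μ/σ. Everything here is an assumption for illustration, not the paper's algorithm: the discount factor, the transition and reward data, the choice of start state, and the assumption that each reward is independent of the subsequent trajectory given the current state.

```python
import numpy as np
from itertools import product

gamma = 0.9  # discount factor (illustrative assumption)

# Two states, two actions. P[a][s] is the transition row for action a in
# state s; r_mean/r_var give the mean and variance of the normal one-step
# reward for each (action, state) pair. All numbers are made up.
P = np.array([[[0.7, 0.3], [0.4, 0.6]],   # action 0
              [[0.2, 0.8], [0.9, 0.1]]])  # action 1
r_mean = np.array([[1.0, 2.0],            # action 0
                   [1.5, 0.5]])           # action 1
r_var = np.array([[0.25, 1.00],
                  [0.50, 0.10]])

def evaluate(policy):
    """Mean and variance of the discounted return under a stationary policy.

    Assumes each one-step reward is independent of the next state and of
    the rest of the trajectory, given the current state and action.
    """
    n = len(policy)
    Pp = np.array([P[a, s] for s, a in enumerate(policy)])
    r = np.array([r_mean[a, s] for s, a in enumerate(policy)])
    v = np.array([r_var[a, s] for s, a in enumerate(policy)])
    I = np.eye(n)
    # Mean return: mu = r + gamma * Pp @ mu
    mu = np.linalg.solve(I - gamma * Pp, r)
    # Second moment: M = (r^2 + v) + 2*gamma*r*(Pp @ mu) + gamma^2 * Pp @ M
    rhs = r**2 + v + 2 * gamma * r * (Pp @ mu)
    M = np.linalg.solve(I - gamma**2 * Pp, rhs)
    return mu, M - mu**2  # variance = second moment minus squared mean

# Enumerate all stationary policies and pick the one maximizing mu/sigma
# at a chosen start state (state 0, purely for illustration).
def score(pi):
    mu, var = evaluate(pi)
    return mu[0] / np.sqrt(var[0])

best = max(product(range(2), repeat=2), key=score)
mu, var = evaluate(best)
print("best policy:", best, " mu/sigma:", mu[0] / np.sqrt(var[0]))
```

Exhaustive enumeration is used here only because the example is tiny; the paper's actual algorithm for this decision rule would replace the brute-force search.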
