Abstract

The paper is concerned with a discounted Markov decision process with an unknown parameter that is estimated anew at each stage. Algorithms are proposed that are intermediate between, and include, the "classical" (but time-consuming) principle of estimation and control and the simpler nonstationary value iteration, which converges more slowly. These algorithms perform a single policy-improvement step after each estimation, and the policy thus obtained is then evaluated either completely (policy-iteration) or incompletely (policy-value-iteration). It is shown that both of these methods lead to asymptotically discount-optimal policies. In addition, the results are generalized to cases where systematic errors do not vanish as the number of stages increases.
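To make the scheme concrete, the following is a minimal sketch (not the paper's exact algorithm) of the adaptive policy-iteration variant for a finite discounted MDP: at every stage the unknown model parameter is re-estimated, one policy-improvement step is performed, and the resulting policy is evaluated completely. The estimator `estimate_model` and all names below are hypothetical placeholders.

```python
# Illustrative sketch only: adaptive control of a finite discounted MDP whose
# transition law and rewards depend on an unknown parameter re-estimated each stage.
import numpy as np

def policy_evaluation(P, r, policy, gamma):
    """Complete evaluation: solve v = r_pi + gamma * P_pi v for the given policy."""
    n = r.shape[0]
    P_pi = P[np.arange(n), policy]   # n x n transition matrix under the policy
    r_pi = r[np.arange(n), policy]   # expected one-step reward under the policy
    return np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)

def policy_improvement(P, r, v, gamma):
    """One improvement step: greedy policy w.r.t. the current value estimate."""
    q = r + gamma * P @ v            # q[s, a] = r(s, a) + gamma * E[v(s') | s, a]
    return q.argmax(axis=1)

def adaptive_policy_iteration(estimate_model, n_states, gamma, n_stages):
    """At every stage: re-estimate the model, improve once, evaluate completely.

    `estimate_model(stage)` stands in for the estimator of the unknown parameter;
    it returns (P_hat, r_hat) of shapes (n_states, n_actions, n_states) and
    (n_states, n_actions).
    """
    policy = np.zeros(n_states, dtype=int)
    v = np.zeros(n_states)
    for stage in range(n_stages):
        P_hat, r_hat = estimate_model(stage)                  # fresh estimate at this stage
        policy = policy_improvement(P_hat, r_hat, v, gamma)   # single improvement step
        v = policy_evaluation(P_hat, r_hat, policy, gamma)    # complete evaluation
    return policy, v
```

The incomplete-evaluation (policy-value-iteration) variant would replace the exact solve in `policy_evaluation` with a fixed number of value-iteration sweeps under the current policy.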
