Abstract

In this paper we propose a technique to accelerate the convergence rate of the value iteration (VI) algorithm applied to discrete average cost Markov decision processes (MDPs). The convergence rate is measured with respect to the total computational effort rather than the iteration counter. Such a rate definition makes it possible to compare different classes of algorithms, which employ distinct and possibly variable updating schemes. A partial information value iteration (PIVI) algorithm is proposed that updates an increasingly accurate approximate version of the original problem, with a view toward saving computations at the early stages of the algorithm, when one is typically far from the optimal solution. The overall computational effort of PIVI is compared with that of the classical VI algorithm for a broad set of parameters. The results suggest that a suitable choice of parameters can lead to significant computational savings in the process of finding the optimal solution for discrete MDPs under the average cost criterion.
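As a point of reference for the computational effort the abstract discusses, the following is a minimal sketch of standard relative value iteration for an average-cost MDP, the classical baseline that PIVI is compared against. The function name, the stopping rule, and the tiny two-state example are illustrative assumptions, not taken from the paper; each full sweep costs on the order of |A||S|^2 operations, which is the per-iteration effort that a partial-information scheme would aim to reduce early on.

```python
import numpy as np

def relative_value_iteration(P, c, tol=1e-8, max_iter=10_000):
    """Relative value iteration (illustrative sketch, not the paper's PIVI).

    P : array of shape (A, S, S), transition probabilities P[a, s, s'].
    c : array of shape (A, S), one-stage costs c[a, s].
    Returns (gain, bias, policy): estimated average cost, relative
    value (bias) vector, and a greedy policy.
    """
    A, S, _ = P.shape
    h = np.zeros(S)           # relative value (bias) function
    ref = 0                   # reference state used to normalize h
    gain = 0.0
    for _ in range(max_iter):
        # One-stage cost plus expected future relative value, per action.
        Q = c + P @ h         # shape (A, S); one full sweep, O(A*S^2)
        h_new = Q.min(axis=0)
        gain = h_new[ref]     # current estimate of the average cost
        h_new = h_new - gain  # subtract the gain to keep h bounded
        # Span-seminorm stopping rule.
        diff = h_new - h
        h = h_new
        if diff.max() - diff.min() < tol:
            break
    policy = (c + P @ h).argmin(axis=0)
    return gain, h, policy

# Tiny two-state, two-action MDP, made up purely for this sketch.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
c = np.array([[1.0, 3.0],
              [2.0, 0.5]])
gain, bias, policy = relative_value_iteration(P, c)
print("average cost:", gain, "policy:", policy)
```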
