Morton and Wecker (1977) stated that the value iteration algorithm solves a dynamic program's policy function faster than its value function when the limiting Markov chain is ergodic. I show that their proof is incomplete and provide a new proof of this classic result. I use this result to accelerate the estimation of Markov decision processes and the solution of Markov perfect equilibria.

Keywords: Markov decision process, Markov perfect equilibrium, strong convergence, relative value iteration, dynamic discrete choice, nested fixed point, nested pseudo-likelihood

JEL classification: C01, C13, C15, C61, C63, C65
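The headline claim, that the greedy policy produced by value iteration settles down long before the value function itself converges, can be seen on a toy example. Below is a minimal sketch using a hypothetical two-state, two-action MDP with made-up rewards and ergodic transition matrices; it is an illustration of the phenomenon, not the paper's model, proof, or acceleration method:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (numbers invented for illustration).
# rewards[s, a]: reward in state s under action a.
rewards = np.array([[1.0, 0.5],
                    [0.0, 2.0]])
# P[a, s, s']: transition probabilities; each row is ergodic.
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.6, 0.4]]])
beta = 0.95   # discount factor
tol = 1e-8    # value-function convergence tolerance

V = np.zeros(2)
policy_prev = None
last_policy_change = 0

for it in range(1, 10_000):
    # Bellman update: Q[s, a] = r(s, a) + beta * E[V(s') | s, a]
    Q = rewards + beta * np.tensordot(P, V, axes=([2], [0])).T
    policy = Q.argmax(axis=1)   # greedy policy at this iteration
    V_new = Q.max(axis=1)

    # Record the last iteration at which the greedy policy changed.
    if policy_prev is None or not np.array_equal(policy, policy_prev):
        last_policy_change = it

    # Value function converges only when the sup-norm change is tiny.
    if np.max(np.abs(V_new - V)) < tol:
        print(f"policy last changed at iteration {last_policy_change}")
        print(f"value function converged (tol={tol}) at iteration {it}")
        break

    V = V_new
    policy_prev = policy
```

On examples like this, the greedy policy typically stops changing after a handful of iterations, while driving the value function below the tolerance takes many times more; this gap is what the paper exploits when only the policy (not the exact value) is needed.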