Abstract

Optimal stochastic control problems with general state spaces are often computationally intractable. On the other hand, for finite state-action models there exist powerful computational and simulation tools for computing optimal strategies. With this motivation, we consider finite state and action space approximations of discrete-time Markov decision processes with discounted and average costs and compact state and action spaces. Stationary policies obtained from finite-state approximations of the original model are shown to approximate the optimal stationary policy with arbitrary precision under mild technical conditions. These results complement recent work that studied the finite action approximation of discrete-time Markov decision processes with discounted and average costs.
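The approximation scheme described above can be illustrated with a minimal sketch: quantize a compact state space into finitely many cells, solve the resulting finite discounted MDP by value iteration, and refine the quantization. The toy model below (state space [0, 1], a three-element action set, deterministic dynamics, quadratic stage cost) is entirely hypothetical and chosen only for illustration; it is not a model from the paper, and the paper's convergence guarantees concern the general construction, not this particular example.

```python
# Hypothetical toy model (not from the paper): state space [0, 1],
# actions {-0.1, 0.0, 0.1}, deterministic dynamics x' = clip(x + a),
# stage cost c(x, a) = (x - 0.5)^2 + 0.01 a^2, discount factor 0.9.
ACTIONS = [-0.1, 0.0, 0.1]
BETA = 0.9

def cost(x, a):
    return (x - 0.5) ** 2 + 0.01 * a ** 2

def step(x, a):
    return min(1.0, max(0.0, x + a))

def solve_quantized(n_bins, iters=500):
    """Value iteration on the finite model obtained by quantizing [0, 1]
    into n_bins uniform cells, each represented by its midpoint."""
    grid = [(i + 0.5) / n_bins for i in range(n_bins)]

    def nearest(x):  # quantizer: map a state to the index of its cell
        return min(n_bins - 1, int(x * n_bins))

    V = [0.0] * n_bins
    for _ in range(iters):
        # Bellman update on the quantized model.
        V = [min(cost(x, a) + BETA * V[nearest(step(x, a))] for a in ACTIONS)
             for x in grid]
    return grid, V

# Refining the quantization yields increasingly accurate value functions.
for n in (5, 20, 80):
    grid, V = solve_quantized(n)
    print(n, min(V))
```

A stationary policy for the original model is then obtained by composing the quantizer with the optimal policy of the finite model; the paper's results give conditions under which this policy is near-optimal as the quantization is refined.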
