Abstract
Optimal stochastic control problems on general state spaces are often computationally intractable. On the other hand, for finite state-action models, there exist powerful computational and simulation tools for computing optimal strategies. With this motivation, we consider finite state and action space approximations of discrete-time Markov decision processes with discounted and average costs and compact state and action spaces. Stationary policies obtained from finite state approximations of the original model are shown to approximate the optimal stationary policy with arbitrary precision under mild technical conditions. These results complement recent work that studied the finite action approximation of discrete-time Markov decision processes with discounted and average costs.
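To make the approximation scheme concrete, the following sketch quantizes a compact state space into a finite grid and solves the resulting finite MDP by value iteration for the discounted-cost criterion. Everything here (the dynamics, one-stage cost, grid sizes, and noise model) is an invented illustration, not the construction from the paper; it only shows the general pattern of discretizing and then applying a finite-model solver.

```python
import numpy as np

# Hypothetical example: quantize the compact state space [0, 1] into N
# grid points and a finite action set, then solve the finite MDP by
# value iteration. All model data below is invented for illustration.
N, M = 50, 5                        # number of quantized states / actions
beta = 0.9                          # discount factor
states = np.linspace(0.0, 1.0, N)   # quantized state space
actions = np.linspace(-0.5, 0.5, M) # finite action set

def cost(x, a):
    # Illustrative one-stage cost c(x, a).
    return x**2 + 0.1 * a**2

# Transition matrix P[a, i, j]: from grid point i under action a, the
# next state is x + a plus additive noise, clipped to [0, 1] and mapped
# to the nearest grid point (nearest-neighbor quantization).
noise = np.linspace(-0.2, 0.2, 9)
w = np.exp(-0.5 * (noise / 0.1) ** 2)
w /= w.sum()                        # discretized Gaussian noise weights
P = np.zeros((M, N, N))
for ai, a in enumerate(actions):
    for i, x in enumerate(states):
        for wk, eps in zip(w, noise):
            j = np.abs(states - np.clip(x + a + eps, 0.0, 1.0)).argmin()
            P[ai, i, j] += wk

C = np.array([[cost(x, a) for a in actions] for x in states])  # N x M

# Value iteration: V_{k+1}(x) = min_a [ c(x, a) + beta * E V_k(X') ].
V = np.zeros(N)
for _ in range(500):
    Q = C + beta * np.einsum('aij,j->ia', P, V)  # N x M Q-factors
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

# Stationary policy on the grid; extending it to the original compact
# state space (e.g., piecewise-constant over quantization cells) gives
# the kind of near-optimal policy the approximation results concern.
policy = actions[Q.argmin(axis=1)]
```

The stationary policy computed on the grid would then be extended to the original compact state space, e.g. as a piecewise-constant map over the quantization cells; the paper's results concern the near-optimality of policies obtained this way as the quantization is refined.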