Abstract

This paper deals with the finite approximation of first passage models for discrete-time Markov decision processes with varying discount factors. For a given control model $\mathcal{M}$ with denumerable states, compact Borel action sets, and possibly unbounded reward functions, under reasonable conditions we prove that there exists a sequence of control models $\mathcal{M}_{n}$ such that the first passage optimal rewards and policies of $\mathcal{M}_{n}$ converge to those of $\mathcal{M}$, respectively. Based on these convergence theorems, we propose a finite-state and finite-action truncation method for the given control model $\mathcal{M}$, and show that the first passage optimal reward and policies of $\mathcal{M}$ can be approximated by those of the solvable truncated finite control models. Finally, we give the corresponding value and policy iteration algorithms to solve the finite approximation models.
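The truncated finite models mentioned in the abstract can be solved by standard dynamic programming. As an illustration only, here is a minimal value-iteration sketch for a hypothetical finite first-passage model with state-action-dependent discount factors; the states, actions, rewards, discount factors, and transition probabilities below are invented toy data, not taken from the paper.

```python
import numpy as np

# Hypothetical toy first-passage model: states 0..2, where state 2 is the
# target (absorbing) set. All numbers are illustrative.
n_states, n_actions = 3, 2
target = [2]

# r[s, a]: one-step reward; alpha[s, a]: state-action-dependent discount.
r = np.array([[1.0, 0.5],
              [0.2, 1.5],
              [0.0, 0.0]])
alpha = np.array([[0.90, 0.80],
                  [0.85, 0.95],
                  [0.00, 0.00]])
# P[s, a, s']: transition probabilities; each P[s, a] sums to 1.
P = np.array([
    [[0.6, 0.3, 0.1], [0.1, 0.6, 0.3]],
    [[0.2, 0.5, 0.3], [0.0, 0.4, 0.6]],
    [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]],
])

def value_iteration(tol=1e-8, max_iter=10_000):
    """Iterate V <- max_a [ r(s,a) + alpha(s,a) * sum_s' P(s'|s,a) V(s') ],
    with the value fixed at 0 on the target set (first passage stops there)."""
    V = np.zeros(n_states)
    for _ in range(max_iter):
        Q = r + alpha * (P @ V)     # Q[s, a], shape (n_states, n_actions)
        Q[target, :] = 0.0          # reward accrual stops at the target set
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = Q.argmax(axis=1)       # a greedy (deterministic stationary) policy
    return V, policy

V, policy = value_iteration()
```

Since the discount factors are bounded below 1 here, the update is a contraction and the iteration converges geometrically; for the truncated models in the paper, the analogous iteration runs over the finite truncated state and action sets.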
