Abstract

In this paper we study both bias optimality and strong n-discount optimality (n = −1, 0) for denumerable discrete-time Markov decision processes. The rewards may be unbounded both from above and from below. We give sufficient conditions on the system's primitive data under which we prove (1) the existence of a solution to the bias optimality equation and of bias optimal policies; (2) a condition equivalent to the bias optimality of a policy; (3) the equivalence of expected average reward optimality and strong (−1)-discount optimality; (4) the equivalence of bias optimality and strong 0-discount optimality; and (5) the existence of strong n-discount optimal (n = −1, 0) stationary policies. Our conditions are weaker than those in the previous literature. Moreover, our results are illustrated by a controlled random walk.
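For orientation, the following LaTeX sketch records the usual definitions behind the terms above; the notation (the discounted value V_\alpha, reward r, expectation \mathbb{E}_x^\pi) and the precise formulation are our own assumptions about the standard conventions, not taken verbatim from the paper.

% A hedged sketch of the standard definitions (our notation, assumed, not the paper's).
% alpha-discounted expected reward of policy \pi from initial state x:
\[
  V_\alpha(x,\pi) \;=\; \mathbb{E}_x^{\pi}\!\Bigl[\sum_{t=0}^{\infty} \alpha^{t}\, r(x_t,a_t)\Bigr],
  \qquad \alpha \in (0,1),
  \qquad V_\alpha^{*}(x) \;:=\; \sup_{\pi} V_\alpha(x,\pi).
\]
% A policy \pi^* is commonly called strong n-discount optimal (n = -1, 0) if
\[
  \lim_{\alpha \uparrow 1}\,(1-\alpha)^{-n}\bigl[\,V_\alpha(x,\pi^{*}) - V_\alpha^{*}(x)\,\bigr] \;=\; 0
  \qquad \text{for every state } x.
\]
% Taking n = -1 ties this notion to expected average reward optimality, and
% taking n = 0 ties it to bias optimality; results (3) and (4) of the abstract
% state that these links hold exactly under the paper's conditions.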
