Abstract

In this article we consider a continuous‐time Markov decision process with a denumerable state space and nonzero terminal rewards. We first establish necessary and sufficient optimality conditions without any restriction on the cost functions: the necessary condition is derived through the Pontryagin maximum principle, and the sufficient condition follows from the inherent structure of the problem. We then introduce a dynamic programming approximation algorithm for the finite‐horizon problem; as the time step between discrete points decreases, the optimal policy of the discretized problem converges to that of the continuous‐time problem in the sense of weak convergence. For the infinite‐horizon problem, a successive approximation method is introduced as an alternative to policy iteration.
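To make the discretization idea concrete, the following is a minimal sketch of a backward-in-time dynamic programming recursion of the kind the abstract describes. It assumes a finite truncation of the denumerable state space, bounded transition rates, and a uniform time step h small enough that I + hQ(a) has nonnegative entries; the function name discretized_dp and the arrays q, r, g are hypothetical illustrations, not notation from the paper.

```python
import numpy as np

def discretized_dp(q, r, g, T, h):
    """Finite-horizon value iteration on a time grid of step h.

    q : (A, S, S) transition-rate matrices, one per action
        (rows sum to zero, off-diagonal entries nonnegative)
    r : (A, S)    reward rates for each action and state
    g : (S,)      terminal rewards at the horizon T
    Returns the value function at time 0 and the time-indexed policy.
    """
    n_steps = int(round(T / h))
    A, S, _ = q.shape
    eye = np.eye(S)
    v = g.astype(float).copy()                  # V_T = terminal reward
    policy = np.zeros((n_steps, S), dtype=int)
    for k in range(n_steps - 1, -1, -1):
        # One-step Euler approximation of the Bellman equation:
        # V_t(i) ~ max_a [ h*r(a,i) + sum_j (I + h*Q(a))_{ij} V_{t+h}(j) ]
        cand = np.stack([h * r[a] + (eye + h * q[a]) @ v for a in range(A)])
        policy[k] = cand.argmax(axis=0)
        v = cand.max(axis=0)
    return v, policy

# Illustrative two-state, two-action instance with nonzero terminal rewards.
q = np.array([[[-1.0, 1.0], [2.0, -2.0]],
              [[-0.5, 0.5], [1.0, -1.0]]])
r = np.array([[1.0, 0.0],
              [0.5, 0.3]])
g = np.array([0.0, 2.0])
v0, pi = discretized_dp(q, r, g, T=1.0, h=0.01)
```

As h shrinks, the recursion tracks the continuous-time Bellman equation more closely, which is the sense in which the discretized optimal policy approximates the continuous-time one; the convergence result stated in the abstract is the weak-convergence justification for this scheme.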

