Abstract

In this paper, we propose an approach for approximating the value function and an ϵ-optimal policy of continuous-time Markov decision processes with Borel state and action spaces, with possibly unbounded cost and transition rates, under the total expected discounted cost optimality criterion. Under suitable assumptions, which include in particular that the transition rates have density functions with respect to a reference measure and that the elements of the control model are piecewise Lipschitz continuous, we approximate the original controlled process by a model with finite state and action spaces. The approximation error is related to the 1-Wasserstein distance between suitably defined probability measures and approximating measures with finite support. We also study the case in which the reference measure is approximated by empirical distributions, and we show that the approximations converge at an exponential rate in probability.
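For context, recall the standard (Kantorovich–Rubinstein) characterization of the 1-Wasserstein distance used here to quantify the approximation error; the symbols μ, ν, and the metric d below are generic and do not necessarily match the paper's own notation:

W_1(\mu,\nu) \;=\; \inf_{\gamma \in \Gamma(\mu,\nu)} \int_{X \times X} d(x,y)\, \gamma(dx,dy) \;=\; \sup_{\mathrm{Lip}(f) \le 1} \left| \int_X f\, d\mu - \int_X f\, d\nu \right|,

where \Gamma(\mu,\nu) denotes the set of couplings of \mu and \nu, and the supremum is taken over functions f with Lipschitz constant at most 1. The dual (supremum) form explains why Lipschitz-type continuity of the model data lets closeness of measures in W_1 translate into closeness of value functions.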
