Abstract

This paper studies a continuous-time Markov decision process \( {\mathcal{M}} \), with Borel state and action spaces, under the total expected discounted cost optimality criterion. By suitably approximating an underlying probability measure with a measure of finite support and by discretizing the action sets of the control model, we construct a finite state and action space Markov decision process that approximates \( {\mathcal{M}} \) and can be solved explicitly. We derive bounds on the approximation error of the optimal discounted cost function; these bounds are expressed in terms of Wasserstein and Hausdorff distances. We illustrate the results with a numerical application to a queueing problem.
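To make the first approximation step concrete, here is a minimal sketch of replacing a continuous probability measure by a finite-support measure and estimating the resulting Wasserstein error. The choice of a standard normal measure, the quantile-based discretization, and the atom count `n` are illustrative assumptions, not the construction used in the paper.

```python
import numpy as np
from scipy.stats import norm, wasserstein_distance

# Illustrative continuous measure: a standard normal (an assumed example,
# not the measure from the paper).
n = 50  # number of atoms in the finite-support approximation (assumed)

# Quantile discretization: place n equally weighted atoms at the
# midpoint quantiles of the distribution.
probs = (np.arange(n) + 0.5) / n
atoms = norm.ppf(probs)

# Estimate the 1-Wasserstein distance between the original measure and
# its finite-support approximation via a large i.i.d. sample.
rng = np.random.default_rng(0)
sample = rng.standard_normal(100_000)
err = wasserstein_distance(sample, atoms)
print(f"W1 error with {n} atoms: {err:.4f}")
```

Increasing `n` drives the Wasserstein error toward zero, which is the mechanism behind error bounds of the kind stated in the abstract.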
