We are interested in continuous-time, denumerable-state controlled Markov chains (CMCs) with compact Borel action sets and possibly unbounded transition and reward rates, under the discounted reward optimality criterion. For such CMCs, we propose a definition of convergence of a sequence of control models {ℳ_n} to a given control model ℳ which ensures that the discount-optimal rewards and policies of ℳ_n converge to those of ℳ. As an application, we propose a finite-state and finite-action truncation technique for the original control model ℳ, which we illustrate by numerically approximating the optimal reward and policy of a controlled population system with catastrophes. We also study the corresponding convergence rates.
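As a purely illustrative sketch (not the paper's construction), the following Python snippet shows one way a finite-state, finite-action truncation combined with uniformization and value iteration could be used to approximate the discount-optimal reward and policy of a controlled birth-death population subject to catastrophes. All parameters, rates, rewards, and the truncation level are hypothetical assumptions introduced here for illustration only.

```python
import numpy as np

# Hypothetical illustrative parameters (not taken from the paper)
N = 50                                 # truncation level: states {0, ..., N}
alpha = 0.5                            # discount rate
lam = 1.0                              # per-individual birth rate
cat = 0.2                              # catastrophe rate (population jumps to 0)
actions = np.linspace(0.0, 2.0, 21)    # finite grid on a compact action set [0, 2]

def reward(i, a):
    # Hypothetical reward rate: revenue from the population minus control cost
    return 2.0 * i - a * i - 0.1 * a ** 2

def rates(i, a):
    """Transition rates out of state i under action a, truncated at N."""
    q = {}
    if i < N:
        q[i + 1] = lam * i                  # birth
    if i > 0:
        q[i - 1] = a * i                    # controlled removal
        q[0] = q.get(0, 0.0) + cat          # catastrophe wipes out the population
    return q

# Uniformization constant: upper bound on total exit rates in the truncated model
Lam = lam * N + actions.max() * N + cat + 1e-9

def bellman(V):
    """One value-iteration step for the uniformized discounted model."""
    newV = np.empty_like(V)
    pol = np.empty(N + 1)
    for i in range(N + 1):
        best_val, best_a = -np.inf, actions[0]
        for a in actions:
            q = rates(i, a)
            qi = sum(q.values())
            val = (reward(i, a)
                   + sum(qij * V[j] for j, qij in q.items())
                   + (Lam - qi) * V[i]) / (alpha + Lam)
            if val > best_val:
                best_val, best_a = val, a
        newV[i], pol[i] = best_val, best_a
    return newV, pol

V = np.zeros(N + 1)
for _ in range(2000):
    V_new, policy = bellman(V)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

print("approx. optimal discounted reward at state 10:", V[10])
print("approx. optimal action at state 10:", policy[10])
```

The iteration is a contraction with modulus Λ/(α + Λ) < 1 on the truncated model, so it converges to the truncated model's discount-optimal value; how such truncated values and policies relate to those of the original, untruncated model is precisely the question the paper's convergence framework addresses.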