In this paper, a stochastic optimal control problem is considered for a continuous-time Markov chain taking values in a denumerable state space over a fixed finite horizon. The optimality criterion is the probability that the process remains in a target set up to and including a given time. The optimal value is a superadditive capacity of target sets. Under mild assumptions on the controlled Markov process, we establish the dynamic programming principle, from which we prove that the value function is a classical solution of the Hamilton-Jacobi-Bellman (HJB) equation on a discrete lattice space. We then prove that an optimal deterministic Markov control exists under a compactness assumption on the control domain. We further prove that the value function is the unique solution of the HJB equation. We also treat the case in which the process starts outside the target set and give the corresponding results. Finally, we apply our results to two examples.
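For concreteness, a natural formalization of this criterion (the notation below is ours, not taken from the paper) is, for a controlled chain $X^{\alpha}$, target set $B$, horizon $T$, and admissible controls $\mathcal{A}$,
$$
V(t,x) \;=\; \sup_{\alpha \in \mathcal{A}} \mathbb{P}\bigl( X^{\alpha}_{s} \in B \ \text{for all } s \in [t,T] \,\big|\, X^{\alpha}_{t} = x \bigr), \qquad x \in B,
$$
so that $V$ is the maximal probability of staying in $B$ over the remaining horizon.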