We consider an infinite-horizon discounted constrained Markov decision process (CMDP) with uncertain transition probabilities. We assume that the uncertainty in the transition probabilities has a rank-1 matrix structure and that the underlying uncertain parameters belong to a polytope. We formulate the uncertain CMDP problem using a robust optimization framework. To derive a reformulation of the robust CMDP problem, we restrict attention to the class of stationary policies and show that the restricted problem is equivalent to a bilinear programming problem. We provide a simple example in which a Markov policy performs better than the optimal policy in the class of stationary policies, implying that, unlike in the classical CMDP problem, an optimal policy of the robust CMDP problem need not belong to the class of stationary policies. For the case of a single uncertain parameter, we propose sufficient conditions under which an optimal policy of the restricted robust CMDP problem is unaffected by uncertainty. Numerical experiments are performed on randomly generated instances of a machine replacement problem and on a well-known class of problems called Garnets.
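As a minimal illustrative sketch (not the paper's exact model), the rank-1 uncertainty structure described above can be written, for a fixed stationary policy, as a family of transition matrices P(θ) = P̄ + θ·baᵀ with the scalar parameter θ ranging over an interval (a one-dimensional polytope). The matrices P̄, b, a, the reward vector, and the discount factor below are all invented for illustration; the vector a sums to zero so every P(θ) remains row-stochastic.

```python
import numpy as np

# Hypothetical 2-state MDP under a fixed stationary policy with
# rank-1 structured uncertainty: P(theta) = P_bar + theta * (b @ a.T).
P_bar = np.array([[0.7, 0.3],
                  [0.4, 0.6]])    # nominal transition matrix (illustrative)
b = np.array([[1.0], [1.0]])     # column vector of the rank-1 term
a = np.array([[1.0], [-1.0]])    # entries sum to 0 -> rows of P(theta) still sum to 1
r = np.array([1.0, 0.0])         # per-state reward under the fixed policy
gamma = 0.9                      # discount factor

def value(theta):
    """Discounted value v(theta) = (I - gamma * P(theta))^{-1} r."""
    P = P_bar + theta * (b @ a.T)
    return np.linalg.solve(np.eye(2) - gamma * P, r)

# Worst-case (robust) value from state 0 over theta in [-0.1, 0.1],
# approximated here by a coarse grid over the uncertainty interval.
thetas = np.linspace(-0.1, 0.1, 5)
robust_v0 = min(value(t)[0] for t in thetas)
print(robust_v0)
```

The robust value is, by construction, no larger than the nominal value `value(0.0)[0]`; the paper's bilinear-programming reformulation replaces this naive grid search with an exact optimization over the uncertainty polytope and the policy simultaneously.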