Abstract

This paper considers the restless multiarmed bandit (RMAB) problem involving partially observed Markov decision processes with heterogeneous state transition matrices in discrete time slots. Due to the notorious PSPACE-hardness of the generic RMAB, we turn to the myopic policy, i.e., scheduling the best processes at each time slot, since the myopic policy has linear computational complexity and is widely adopted in practice. Intuitively, the myopic policy is not optimal because it only exploits information without exploring. In this paper, to show the optimality of the myopic policy, we first generalize the concept of first-order stochastic dominance ordering and then decompose each state transition matrix into a weighted sum of a series of eigenmatrices, in which the weights are determined by the corresponding eigenvalues. Furthermore, we identify two sets of sufficient conditions on the eigenvalues and eigenmatrices that guarantee the optimality of the myopic policy for two instances, respectively. These theoretical results provide insights into the structure of the optimal policy for the generic RMAB.
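To make the decomposition concrete, the following is a minimal sketch of the standard spectral decomposition of a diagonalizable transition matrix, P = Σᵢ λᵢ Eᵢ, where each eigenmatrix Eᵢ is the rank-one product of matching right and left eigenvectors. The specific matrix P and all variable names below are illustrative assumptions; the abstract does not give the paper's exact construction.

import numpy as np

# Hypothetical 2-state transition matrix (rows sum to 1); any
# diagonalizable stochastic matrix works the same way.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Right eigenvectors: columns of V satisfy P @ V = V @ diag(lam).
lam, V = np.linalg.eig(P)
U = np.linalg.inv(V)  # rows of U are the matching left eigenvectors

# Eigenmatrices E_i = v_i u_i^T: rank-one projections with
# E_i E_j = 0 for i != j and sum_i E_i = I.
E = [np.outer(V[:, i], U[i, :]) for i in range(len(lam))]

# Reconstruct P as the eigenvalue-weighted sum of eigenmatrices.
P_rebuilt = sum(l * Ei for l, Ei in zip(lam, E))
assert np.allclose(P, P_rebuilt)

# The k-step transition matrix follows immediately:
# P^k = sum_i lam_i^k E_i, which is why conditions on the
# eigenvalues control how the belief about each process evolves.
k = 5
assert np.allclose(np.linalg.matrix_power(P, k),
                   sum(l**k * Ei for l, Ei in zip(lam, E)))

Because Pᵏ = Σᵢ λᵢᵏ Eᵢ, sufficient conditions stated on the eigenvalues and eigenmatrices translate directly into conditions on how beliefs propagate over multiple unobserved slots, which is the mechanism the abstract alludes to.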
