Abstract

We study the asymptotic optimal control of multi-class restless bandits. A restless bandit is a controllable stochastic process whose state evolution depends on whether or not the bandit is made active. Since finding the optimal control is typically intractable, we propose a class of priority policies that are proved to be asymptotically optimal under a global attractor property and a technical condition. We consider both a fixed population of bandits as well as a dynamic population where bandits can depart and arrive. As an example of a dynamic population of bandits, we analyze a multi-class $\mathit{M/M/S+M}$ queue for which we show asymptotic optimality of an index policy. We combine fluid-scaling techniques with linear programming results to prove that when bandits are indexable, Whittle’s index policy is included in our class of priority policies. We thereby generalize a result of Weber and Weiss [J. Appl. Probab. 27 (1990) 637–648] about asymptotic optimality of Whittle’s index policy to settings with (i) several classes of bandits, (ii) arrivals of new bandits and (iii) multiple actions. Indexability of the bandits is not required for our results to hold. For nonindexable bandits, we describe how to select priority policies from the class of asymptotically optimal policies and present numerical evidence that, outside the asymptotic regime, the performance of our proposed priority policies is nearly optimal.
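To make the notion of a priority policy concrete, the following is a minimal sketch (in Python) of how an index-based activation rule operates: each bandit is ranked by an index that depends on its class and current state, and the highest-ranked bandits are made active subject to an activation budget. The index values, class labels, and function name here are hypothetical illustrations, not the paper's construction; under Whittle's policy the indices would be computed from each class's single-bandit relaxation.

```python
import numpy as np

def priority_activation(states, index_table, budget):
    """Activate the `budget` bandits with the highest index values.

    states      : list of (class, state) pairs, one per bandit
    index_table : dict mapping (class, state) -> priority index
    budget      : number of bandits that may be made active per slot
    """
    indices = np.array([index_table[s] for s in states])
    # Highest-index bandits are activated; ties broken arbitrarily.
    active = np.argsort(-indices)[:budget]
    action = np.zeros(len(states), dtype=int)
    action[active] = 1  # 1 = active, 0 = passive
    return action

# Example: two classes, two states each, with hypothetical index values.
index_table = {("A", 0): 0.3, ("A", 1): 0.9,
               ("B", 0): 0.5, ("B", 1): 0.1}
states = [("A", 1), ("B", 0), ("A", 0), ("B", 1)]
print(priority_activation(states, index_table, budget=2))  # -> [1 1 0 0]
```

In the asymptotic regime studied here, the budget is a fixed fraction of the bandit population and optimality refers to letting that population grow large, so the priority ordering, rather than any particular tie-breaking rule, drives the performance guarantee.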
