Abstract

We consider the problem of learning in single-player and multiplayer multiarmed bandit models. Bandit problems are classes of online learning problems that capture exploration-versus-exploitation tradeoffs. In a multiarmed bandit model, players can pick among many arms, and each play of an arm generates an i.i.d. reward from an unknown distribution. The objective is to design a policy that maximizes the expected reward over a time horizon in the single-player setting, and the sum of expected rewards in the multiplayer setting. In the multiplayer setting, arms may give different rewards to different players, and there is no separate channel for coordination among the players: any attempt at communication is costly and adds to regret. We propose two decentralizable policies, $\tt E^3$ ($\tt E$-$\tt cubed$) and $\tt E^3$-$\tt TS$, that can be used in both single-player and multiplayer settings. These policies are shown to yield expected regret that grows at most as $O(\log^{1+\delta} T)$ (and $O(\log T)$ under an additional assumption). It is well known that $O(\log T)$ is the lower bound on the rate of growth of regret even in the centralized case. The proposed algorithms improve on prior work, where regret grew as $O(\log^2 T)$. More fundamentally, these policies address the question of the additional cost incurred in decentralized online learning, suggesting that there is at most a $\delta$-factor cost in the order of regret. This solves a problem of relevance in many domains that had been open for a while.
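The abstract does not spell out the $\tt E^3$ or $\tt E^3$-$\tt TS$ policies, but the single-player model it describes is easy to make concrete. The sketch below simulates the standard UCB1 index policy (Auer et al., 2002) on hypothetical Bernoulli arms; it is not the paper's algorithm, only a baseline illustrating the model and the $O(\log T)$ regret growth the abstract cites as the centralized lower bound. The arm means and horizon are assumptions made for the example.

```python
# Minimal single-player multiarmed bandit simulation, for illustration only.
# This is the standard UCB1 policy, NOT the paper's E^3 or E^3-TS policies,
# whose details are not given in the abstract.
import math
import random

def ucb1(means, horizon, seed=0):
    """Run UCB1 on Bernoulli arms; return cumulative pseudo-regret per step."""
    rng = random.Random(seed)
    n_arms = len(means)
    counts = [0] * n_arms          # number of plays of each arm
    sums = [0.0] * n_arms          # total reward collected from each arm
    best = max(means)              # mean of the optimal arm
    regret, cum = [], 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:            # play each arm once to initialize
            arm = t - 1
        else:
            # UCB1 index = empirical mean + exploration bonus
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                                    + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        cum += best - means[arm]   # expected regret increment for this play
        regret.append(cum)
    return regret

if __name__ == "__main__":
    # Hypothetical arm means; cumulative regret should grow like O(log T),
    # so regret / log T should stay roughly bounded as T grows.
    r = ucb1(means=[0.9, 0.6, 0.5], horizon=100_000)
    for t in (1_000, 10_000, 100_000):
        print(f"T={t:>7}: regret = {r[t - 1]:.1f}, "
              f"regret/log T = {r[t - 1] / math.log(t):.2f}")
```

The ratio printed in the last column stabilizing as $T$ grows is the empirical signature of logarithmic regret; the paper's contribution is achieving (near-)logarithmic growth in the decentralized multiplayer setting as well.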
