Abstract

We consider the decentralized multi-armed bandit problem with distinct arms for each player. Each player can pick one arm at each time instant and receives a random reward drawn from an unknown distribution with an unknown mean. The arms yield different rewards to different players. If more than one player selects the same arm, every player on that arm receives zero reward. There is no dedicated control channel for communication or coordination among the players. We propose an online learning algorithm, dUCB4, which achieves near-O(log² T) regret. The motivation comes from opportunistic spectrum access by multiple secondary users in cognitive radio networks, where the users must pick among various wireless channels that look different to different users.
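The collision model above can be illustrated with a minimal simulation: each player independently runs a plain UCB1 index over its own arms, and any arm chosen by more than one player pays zero in that round. This is only a hedged sketch of the problem setting, not the paper's dUCB4 algorithm, and the per-player mean-reward matrix is a hypothetical example.

```python
import math
import random

random.seed(0)

N_PLAYERS, N_ARMS, HORIZON = 2, 3, 5000
# Hypothetical player-specific Bernoulli means: arms look different to each player.
MEANS = [[0.9, 0.5, 0.2],
         [0.3, 0.8, 0.4]]

counts = [[0] * N_ARMS for _ in range(N_PLAYERS)]   # pulls per (player, arm)
sums = [[0.0] * N_ARMS for _ in range(N_PLAYERS)]   # reward totals per (player, arm)

def ucb_arm(p, t):
    # Play each arm once, then pick the arm with the highest UCB1 index.
    for a in range(N_ARMS):
        if counts[p][a] == 0:
            return a
    return max(range(N_ARMS),
               key=lambda a: sums[p][a] / counts[p][a]
                             + math.sqrt(2 * math.log(t) / counts[p][a]))

for t in range(1, HORIZON + 1):
    picks = [ucb_arm(p, t) for p in range(N_PLAYERS)]
    for p, a in enumerate(picks):
        # Collision rule from the abstract: a shared arm yields zero reward.
        collided = picks.count(a) > 1
        r = 0.0 if collided else (1.0 if random.random() < MEANS[p][a] else 0.0)
        counts[p][a] += 1
        sums[p][a] += r

avg = [sum(sums[p]) / HORIZON for p in range(N_PLAYERS)]
print(avg)
```

Because there is no coordination channel, the only feedback a player gets about contention is the zero reward itself; the fully independent learners above can still collide repeatedly, which is exactly the difficulty that motivates a decentralized algorithm such as dUCB4.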
