Abstract
In the multi-armed bandit framework, two formulations are commonly employed to handle time-varying reward distributions: the adversarial bandit and the nonstationary bandit. Although their oracles, algorithms, and regret analyses differ significantly, we provide a unified formulation in this paper that smoothly bridges the two as special cases. The formulation uses an oracle that plays the best action sequence within a switch budget. Depending on the switch budget, it reduces to the oracle in hindsight of the adversarial bandit or the dynamic oracle of the nonstationary bandit. We provide algorithms that attain the optimal regret, together with a matching lower bound. The optimal regret exhibits distinct behavior in two regimes.
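As an illustrative sketch only (the notation below is assumed here, not taken from the paper), a switch-budget oracle of this kind can be written as follows, where mu_t(a) denotes the mean reward of action a at round t, T the horizon, and S the switch budget:

```latex
% Sketch of a switch-budget oracle (notation assumed, not from the paper):
% \mu_t(a) = mean reward of action a at round t, T = horizon, S = switch budget.
\[
  \mathrm{OPT}(S)
  \;=\;
  \max_{\substack{a_1,\dots,a_T \\ \sum_{t=2}^{T} \mathbb{1}\{a_t \neq a_{t-1}\} \le S}}
  \;\sum_{t=1}^{T} \mu_t(a_t).
\]
% With S = 0 this recovers the best fixed action in hindsight (adversarial-bandit oracle);
% with S = T - 1 it recovers the dynamic oracle of the nonstationary bandit.
```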