Abstract

This brief studies a variant of the stochastic multi-armed bandit (MAB) problem in which the agent has a priori knowledge called the near-optimal mean reward (NoMR). In the standard MAB problem, an agent tries to identify the optimal arm without knowing the optimal mean reward. In many practical applications, however, the agent can obtain an estimate of the optimal mean reward, which we define as the NoMR. For instance, in an online Web advertising system based on MAB methods, a user's near-optimal average click-through rate (the NoMR) can be roughly estimated from his/her demographic characteristics. Exploiting the NoMR is therefore an effective way to improve an algorithm's performance. First, we formalize the stochastic MAB problem in which the known NoMR lies between the suboptimal mean reward and the optimal mean reward. Second, using cumulative regret as the performance metric, we show that the lower bound on the cumulative regret for this problem is Ω(1/∆), where ∆ is the gap between the suboptimal mean reward and the optimal mean reward. Compared with the conventional MAB problem, whose regret lower bound grows logarithmically, our lower bound is uniform in the learning step. Third, a novel algorithm, NoMR-BANDIT, is proposed to solve this problem. In NoMR-BANDIT, the NoMR is used to design an efficient exploration strategy. We further analyze the regret upper bound of NoMR-BANDIT and show that it is also uniform, of order O(1/∆), which matches the lower bound; consequently, NoMR-BANDIT is an optimal algorithm for this problem. To improve the generality of our method, CASCADE-BANDIT, built on NoMR-BANDIT, is proposed for the case in which the NoMR is less than the suboptimal mean reward. CASCADE-BANDIT has a regret upper bound of O(∆ log n), where n denotes the learning step, the same order as that of conventional MAB methods. Finally, extensive experimental results demonstrate that NoMR-BANDIT is more efficient than the compared bandit solutions: after sufficient iterations, NoMR-BANDIT incurs 10%-80% less cumulative regret than the state of the art.
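The abstract does not spell out NoMR-BANDIT's pseudocode, so the sketch below is only a hypothetical illustration of the core idea it describes: when a threshold NoMR is known to lie strictly between the suboptimal and optimal mean rewards, any arm whose empirical mean confidently exceeds that threshold must be the optimal arm, so the agent can stop exploring and commit to it, keeping the exploration cost independent of the horizon. The Bernoulli reward model, the confidence radius, and the function name nomr_threshold_bandit are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def nomr_threshold_bandit(arm_means, nomr, horizon, seed=0):
    # Hypothetical NoMR-guided rule (not the paper's exact NoMR-BANDIT):
    # explore round-robin until one arm's empirical mean clears `nomr`
    # with high confidence, then commit to that arm for the remaining steps.
    rng = np.random.default_rng(seed)
    k = len(arm_means)
    counts = np.zeros(k)          # pulls per arm
    sums = np.zeros(k)            # cumulative reward per arm
    committed = None              # arm identified as optimal, if any
    best_mean = max(arm_means)
    regret = 0.0

    for t in range(1, horizon + 1):
        if committed is not None:
            arm = committed                       # exploit the identified arm
        else:
            arm = int(np.argmin(counts))          # round-robin exploration
        reward = rng.binomial(1, arm_means[arm])  # Bernoulli reward (assumption)
        counts[arm] += 1
        sums[arm] += reward
        regret += best_mean - arm_means[arm]

        if committed is None:
            mean_hat = sums[arm] / counts[arm]
            # Confidence radius; chosen for illustration, not the paper's bonus.
            radius = np.sqrt(np.log(max(t, 2)) / (2 * counts[arm]))
            if mean_hat - radius > nomr:
                committed = arm                   # empirical mean clears NoMR
    return regret

# Toy usage: two Bernoulli arms with means 0.5 and 0.7, NoMR = 0.6.
print(nomr_threshold_bandit([0.5, 0.7], nomr=0.6, horizon=5000))
```

With a gap ∆ = 0.2 and NoMR = 0.6, the commit test typically triggers after a number of pulls that does not grow with the horizon, which is the intuition behind a horizon-independent O(1/∆) regret in this setting.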
