Abstract

We determine sharp bounds on the price of bandit feedback for several variants of the mistake-bound model. The first part of the paper presents bounds for the r-input weak reinforcement model and the r-input delayed, ambiguous reinforcement model. In both models, the adversary gives r inputs in each round and indicates a correct answer only if all r guesses are correct. The two models differ only in how the inputs are revealed: in the weak reinforcement model the learner receives all r inputs at once, while in the delayed, ambiguous model the learner must answer each input before receiving the next input of the round.

In the second part of the paper, we investigate models for online learning with permutation patterns, in which a learner attempts to learn a permutation from a set of permutations by guessing statistics related to sub-permutations. For these permutation models, we prove sharp bounds on the price of bandit feedback. One of our lower bounds for online learning of permutations improves on a lower bound of Goldman, Rivest, and Schapire (1993) and matches an upper bound from the same paper.
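To make the two interaction protocols concrete, here is a minimal Python sketch of a single round in each model. The learner and adversary interfaces (guess, guess_one, target) are illustrative assumptions, not anything specified in the paper; the sketch only captures how the r inputs are revealed and how the one-bit feedback is computed.

```python
from typing import Callable, List, Sequence

def weak_reinforcement_round(
    inputs: Sequence[str],                      # all r inputs, revealed at once
    guess: Callable[[Sequence[str]], List[str]],  # hypothetical learner interface
    target: Callable[[str], str],               # hidden target function (assumed)
) -> bool:
    """One round of the r-input weak reinforcement model.

    The learner sees all r inputs together and answers all of them;
    the adversary reports "correct" only when every guess is right.
    """
    guesses = guess(inputs)
    return all(g == target(x) for g, x in zip(guesses, inputs))

def delayed_ambiguous_round(
    inputs: Sequence[str],                      # r inputs, revealed one at a time
    guess_one: Callable[[str], str],            # hypothetical per-input learner
    target: Callable[[str], str],
) -> bool:
    """One round of the r-input delayed, ambiguous reinforcement model.

    The learner must commit to an answer for each input before seeing
    the next one; feedback arrives only at the end of the round.
    """
    guesses = [guess_one(x) for x in inputs]    # sequential commitments
    return all(g == target(x) for g, x in zip(guesses, inputs))
```

In both sketches the single boolean returned is the only feedback the learner ever sees: a False reveals merely that at least one of the r answers was wrong, not which one, which is what makes the reinforcement weak and, in the delayed case, ambiguous.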
