Abstract

We investigate the problem of online optimization under adversarial perturbations. In each round of this repeated game, a player selects an action from a decision set using a randomized strategy, and Nature then reveals a loss function, under which the player incurs a loss for the chosen action. The game repeats for a total of $T$ rounds, over which the player seeks to minimize the total incurred loss, or more precisely, the excess loss relative to a fixed comparison class. The added challenge over traditional online optimization is that in $k$ of the $T$ rounds, after the player selects an action, an adversarial agent perturbs that action arbitrarily. Modeling the perturbations through a worst-case adversary framework, we introduce a randomized algorithm that is provably robust against such adversarial attacks. In particular, we show that this algorithm is Hannan consistent with respect to a rich class of randomized strategies under mild regularity conditions.
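To make the objective concrete, the excess-loss criterion can be written in the standard regret form below; the symbols $\ell_t$, $\tilde{x}_t$, $\Pi$, and $x_t^{\pi}$ are our own illustrative notation, as the abstract does not fix them:
$$
R_T \;=\; \sum_{t=1}^{T} \ell_t(\tilde{x}_t) \;-\; \inf_{\pi \in \Pi} \sum_{t=1}^{T} \ell_t(x_t^{\pi}),
$$
where $\tilde{x}_t$ is the action actually played in round $t$ (equal to the player's chosen action except in the $k$ perturbed rounds), $\ell_t$ is the loss function revealed by Nature, and $\Pi$ is the fixed comparison class of strategies. Hannan consistency then amounts to $\mathbb{E}[R_T]/T \to 0$ as $T \to \infty$.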
