Abstract

We study the online saddle point problem, an online learning problem where, at each iteration, a pair of actions must be chosen without knowledge of the current and future (convex-concave) payoff functions. The objective is to minimize the gap between the cumulative payoffs and the saddle point value of the aggregate payoff function, which we measure using a metric called saddle point regret (SP-Regret). The problem generalizes the online convex optimization framework, but here we must ensure that both players incur cumulative payoffs close to that of the Nash equilibrium of the sum of the games. We propose an algorithm that achieves SP-Regret proportional to [Formula: see text] in the general case and [Formula: see text] SP-Regret in the strongly convex-concave case. We also consider the special case where the payoff functions are bilinear and each decision set is a probability simplex. In this setting, we design algorithms that reduce the bounds on SP-Regret from a linear dependence on the dimension of the problem to a logarithmic one. We also study the problem under bandit feedback and provide an algorithm that achieves sublinear SP-Regret. Finally, we consider an online convex optimization with knapsacks problem motivated by a wide variety of applications, such as dynamic pricing, auctions, and crowdsourcing. We relate this problem to the online saddle point problem and establish [Formula: see text] regret using a primal-dual algorithm.
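As a rough sketch of the metric described above (the notation f_t, X, Y, and T is ours, not taken verbatim from the paper): if f_t : X × Y → R denotes the convex-concave payoff function revealed at iteration t and (x_t, y_t) the pair of actions chosen before observing it, one natural formalization of SP-Regret over T iterations is

\[
\mathrm{SP\text{-}Regret}(T) \;=\; \left| \sum_{t=1}^{T} f_t(x_t, y_t) \;-\; \min_{x \in X} \max_{y \in Y} \sum_{t=1}^{T} f_t(x, y) \right|,
\]

that is, the absolute gap between the realized cumulative payoff and the saddle point (Nash equilibrium) value of the aggregate payoff function.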
