Abstract

We study the iteration complexity of the optimistic gradient descent-ascent (OGDA) method and the extragradient (EG) method for finding a saddle point of a convex-concave unconstrained min-max problem. To do so, we first show that both OGDA and EG can be interpreted as approximate variants of the proximal point method. This is similar to the approach taken in (A. Nemirovski (2004), SIAM J. Optim., 15, pp. 229--251), which analyzes EG as an approximation of the "conceptual mirror prox." In this paper, we highlight how the gradients used in OGDA and EG approximate the gradient of the proximal point method. We then exploit this interpretation to show that both algorithms produce iterates that remain within a bounded set. We further show that the primal-dual gap of the averaged iterates generated by both of these algorithms converges at a rate of $\mathcal{O}(1/k)$. Our theoretical analysis is of interest as it provides the first convergence rate estimate for OGDA in the general convex-concave setting. Moreover, it provides a simple convergence analysis for the EG algorithm in terms of function value without using a compactness assumption.
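To make the two update rules concrete, the following sketch (not part of the paper; the step size, iterate count, and bilinear test objective $f(x, y) = xy$ are illustrative choices) runs EG and OGDA on a toy saddle-point problem whose unique saddle point is the origin. EG takes a gradient step to a midpoint and then updates using the gradient at that midpoint; OGDA combines the current and previous gradients, $2F(z_k) - F(z_{k-1})$, to mimic the implicit proximal point step.

```python
import numpy as np

# Toy bilinear problem f(x, y) = x * y; its unique saddle point is (0, 0).
# F stacks the gradient field (grad_x f, -grad_y f).
def F(z):
    x, y = z
    return np.array([y, -x])

eta = 0.1        # illustrative step size
steps = 1000     # illustrative iterate count
z0 = np.array([1.0, 1.0])

# Extragradient (EG): step to a midpoint, then update with the
# gradient evaluated at that midpoint.
z = z0.copy()
for _ in range(steps):
    z_mid = z - eta * F(z)
    z = z - eta * F(z_mid)
z_eg = z

# Optimistic GDA (OGDA): uses the extrapolated gradient
# 2*F(z_k) - F(z_{k-1}) in place of the (unavailable) proximal gradient.
z, z_prev = z0.copy(), z0.copy()
for _ in range(steps):
    z_new = z - eta * (2 * F(z) - F(z_prev))
    z, z_prev = z_new, z
z_ogda = z

print("EG   distance to saddle:", np.linalg.norm(z_eg))
print("OGDA distance to saddle:", np.linalg.norm(z_ogda))
```

Note that plain simultaneous gradient descent-ascent diverges on this bilinear problem, while both sketched methods drive the iterates toward the saddle point, which is the behavior the proximal-point interpretation explains.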
