Abstract

We present a unified analysis of methods for a broad class of problems, variational inequalities, which includes minimization and saddle point problems as special cases. The analysis builds on the extragradient method, a classical technique for solving variational inequalities. We consider the monotone and strongly monotone cases, which correspond to convex-concave and strongly-convex-strongly-concave saddle point problems. The theoretical analysis rests on parametric assumptions about the extragradient iterations; it can therefore serve as a strong basis for combining existing methods of various types and for creating new algorithms. To demonstrate this, we develop new robust methods, including methods with quantization, coordinate methods, and distributed randomized local methods. Most of these approaches have never been considered in the generality of variational inequalities and have previously been applied only to minimization problems. The robustness of the new methods is confirmed by numerical experiments with GANs.
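To make the core technique concrete, here is a minimal sketch of the (deterministic) extragradient iteration for a monotone variational inequality. The toy operator, step size, and iteration count are illustrative choices, not from the paper; the bilinear saddle point problem min_x max_y xy is a standard example on which plain gradient descent-ascent diverges while extragradient converges.

```python
import numpy as np

def F(z):
    # Operator of the bilinear saddle problem min_x max_y x*y:
    # F(x, y) = (grad_x, -grad_y) = (y, -x). It is monotone, with
    # unique solution (0, 0). This toy operator is an illustrative choice.
    x, y = z
    return np.array([y, -x])

def extragradient(z0, gamma=0.1, iters=2000):
    """Classic extragradient: an extrapolation (look-ahead) step,
    then an update using the operator evaluated at the look-ahead point."""
    z = np.array(z0, dtype=float)
    for _ in range(iters):
        z_half = z - gamma * F(z)   # extrapolation step
        z = z - gamma * F(z_half)   # update with look-ahead operator value
    return z

z_final = extragradient([1.0, 1.0])
```

The look-ahead evaluation is what stabilizes the iteration on this problem: a single gradient descent-ascent step only rotates the iterates around the solution, whereas the extragradient update contracts toward it.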

