Abstract

We develop a new stochastic algorithm for solving pseudomonotone stochastic variational inequalities. Our method builds on Tseng's forward-backward-forward algorithm, which is known in the deterministic literature to be a valuable alternative to Korpelevich's extragradient method when solving variational inequalities over a convex and closed set governed by pseudomonotone Lipschitz continuous operators. The main computational advantage of Tseng's algorithm is that it relies on only a single projection step and two independent queries of a stochastic oracle per iteration. Our algorithm incorporates a minibatch sampling mechanism and leads to almost sure convergence to a solution of the variational inequality. To the best of our knowledge, this is the first stochastic look-ahead algorithm achieving this by using only a single projection at each iteration.
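
To make the iteration structure concrete, here is a minimal sketch of one stochastic FBF step, assuming a Euclidean-ball feasible set and a synthetic noisy affine oracle; the operator, projection, step size, and batch size are illustrative placeholders rather than the paper's setup.

```python
import numpy as np

# Minimal sketch of the stochastic forward-backward-forward (FBF) iteration:
# one projection and two independent minibatch oracle queries per step.
# The operator F(x, xi) = A x + b + xi, the ball constraint, the step size,
# and the batch size are illustrative assumptions, not the paper's setup.

rng = np.random.default_rng(0)
d, batch, step = 5, 64, 0.05
A = rng.standard_normal((d, d))
A = A @ A.T + np.eye(d)                    # positive definite => monotone operator
b = rng.standard_normal(d)

def proj_X(x, radius=10.0):
    """Euclidean projection onto the ball X = {x : ||x|| <= radius}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

def oracle(x, m):
    """Minibatch estimate of T(x) = E[F(x, xi)] from m independent samples."""
    noise = 0.1 * rng.standard_normal((m, d))
    return np.mean(A @ x + b + noise, axis=0)

x = rng.standard_normal(d)
for n in range(200):
    g_x = oracle(x, batch)                 # first oracle query, at X_n
    y = proj_X(x - step * g_x)             # the single projection step
    g_y = oracle(y, batch)                 # second, independent query, at Y_n
    x = y + step * (g_x - g_y)             # forward correction, no projection
```

Note that the final correction step requires no second projection, which is the computational advantage highlighted above.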

Highlights

  • In this paper, we consider the following variational inequality problem, denoted VI(T, X) or simply VI: given a nonempty closed and convex set X ⊆ R^d and a single-valued map T : R^d → R^d, find x∗ ∈ X such that ⟨T(x∗), x − x∗⟩ ≥ 0 for all x ∈ X. (1) We call S(T, X) ≡ X∗ the set of (Stampacchia) solutions of VI(T, X); a small numerical illustration of this condition follows the list.

  • We develop a stochastic version of Tseng's forward-backward-forward algorithm for solving stochastic variational inequality problems over nonempty closed and convex sets.

  • We show that the known theoretical convergence guarantees of Stochastic Extragradient (SEG) carry over to this setting, while our method consistently outperforms SEG in terms of convergence rate and complexity.
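
As a small numerical illustration of the solution condition in (1), the sketch below evaluates the natural residual ‖x − P_X(x − T(x))‖, which vanishes exactly at Stampacchia solutions; the affine operator and box constraint here are assumptions made for the example, not taken from the paper.

```python
import numpy as np

# Illustrative check of the Stampacchia condition (1) via the natural residual
# r(x) = ||x - P_X(x - T(x))||, which is zero if and only if x solves VI(T, X).
# The affine operator T and the box X below are assumptions for the example.

M = np.array([[2.0, 0.5],
              [0.5, 1.0]])
q = np.array([-1.0, 0.5])

def T(x):
    return M @ x + q                       # strongly monotone affine operator

def proj_X(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)              # projection onto the box X = [-1, 1]^2

def natural_residual(x):
    return np.linalg.norm(x - proj_X(x - T(x)))

print(natural_residual(np.array([0.3, -0.2])))   # positive: not a solution of VI(T, X)
```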


Summary

Introduction

We consider the following variational inequality problem, denoted VI(T, X) or simply VI: given a nonempty closed and convex set X ⊆ R^d and a single-valued map T : R^d → R^d, find x∗ ∈ X such that ⟨T(x∗), x − x∗⟩ ≥ 0 for all x ∈ X. If the operator T defined in (2) is known, the expected value formulation can be solved by any standard solution technique for deterministic variational inequalities. The original extragradient scheme of Korpelevich (1976) consists of two projection steps using two evaluations of the deterministic map T at the generated test points yn and xn. Extending this to the stochastic oracle (SO) case, we arrive at the Stochastic Extragradient (SEG) method. Iusem et al. (2017) construct the required estimators by relying on a dynamic sampling strategy, where noise reduction is achieved via minibatch sampling of the stochastic operators F(Xn, ξ) and F(Yn, ξ). Within this minibatch formulation, almost sure convergence of the stochastic process (Xn)n∈N to the solution set can be proven even with constant-step-size implementations of SEG.
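
For contrast with the single-projection FBF step sketched after the abstract, a minibatch SEG iteration in the same illustrative setting could be written as follows; `oracle` and `proj_X` stand for the assumed minibatch estimator and projection from that sketch.

```python
def seg_step(x, step, batch, oracle, proj_X):
    """One minibatch Stochastic Extragradient (SEG) iteration:
    two projections and two independent minibatch oracle queries."""
    g_x = oracle(x, batch)                 # minibatch estimate of T(X_n)
    y = proj_X(x - step * g_x)             # extrapolation step (first projection)
    g_y = oracle(y, batch)                 # independent minibatch estimate of T(Y_n)
    return proj_X(x - step * g_y)          # update step (second projection)
```

Relative to the FBF step, SEG pays for a second projection onto X at every iteration.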

Preliminaries
Stochastic Forward-Backward-Forward Algorithm
Convergence Analysis
Complexity Analysis and Rates
Computational Experiments
Fractional Programming and Applications to Communication Networks
Matrix Games
Conclusion
