Abstract

In this paper we consider iterative methods for stochastic variational inequalities (s.v.i.) with monotone operators. Our basic assumption is that the operator possesses both smooth and nonsmooth components, and that only noisy observations of the problem data are available. We develop a novel Stochastic Mirror-Prox (SMP) algorithm for solving s.v.i. and show that, with a suitable stepsize strategy, it attains the optimal rates of convergence with respect to the problem parameters. We apply the SMP algorithm to stochastic composite minimization and describe particular applications to the stochastic semidefinite feasibility problem and to deterministic eigenvalue minimization.

Highlights

  • Variational inequalities with monotone operators form a convenient framework for the unified treatment of problems with “convex structure”, such as convex minimization, convex-concave saddle point problems, and convex Nash equilibrium problems.

  • Our main development – the Stochastic Mirror-Prox (SMP) algorithm – is presented in Section 3, where we provide some general results about its performance.

  • Looking at (50), we see that the expected accuracy of the SMP as applied, in the aforementioned manner, to (44) is worse only by a factor logarithmic in pl; see (51).
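To make the extragradient structure of SMP concrete, here is a minimal sketch of the iteration: two prox steps per round, each using a fresh noisy operator value, followed by averaging of the search points. This is a generic illustration, not the authors' implementation: the Euclidean setup (projection onto a box as the prox-mapping), the toy bilinear operator F(x, y) = (y, −x), the stepsize, and the noise level are all illustrative assumptions.

```python
import numpy as np

def project_box(z, lo=-1.0, hi=1.0):
    """Euclidean prox-mapping for a box (the simplest mirror setup)."""
    return np.clip(z, lo, hi)

def smp(oracle, z0, gamma, n_steps, rng):
    """Stochastic Mirror-Prox sketch: extrapolation step, update step,
    then return the average of the search points."""
    z = np.asarray(z0, dtype=float)
    total = np.zeros_like(z)
    for _ in range(n_steps):
        w = project_box(z - gamma * oracle(z, rng))  # extrapolation step
        z = project_box(z - gamma * oracle(w, rng))  # update step
        total += w
    return total / n_steps  # averaged approximate solution

# Toy monotone operator: the saddle-point operator of f(x, y) = x*y,
# observed through a stochastic oracle with zero-mean Gaussian noise.
def noisy_operator(z, rng, sigma=0.1):
    x, y = z
    return np.array([y, -x]) + sigma * rng.standard_normal(2)

rng = np.random.default_rng(0)
z_bar = smp(noisy_operator, z0=[0.8, -0.6], gamma=0.1, n_steps=2000, rng=rng)
# z_bar should lie close to the unique solution (0, 0)
```

The averaging in the last step is essential: for merely monotone (here, skew-symmetric) operators the raw iterates circle the solution, while the averaged point converges.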


Summary

Introduction

Variational inequalities with monotone operators form a convenient framework for the unified treatment (including algorithmic design) of problems with “convex structure”, such as convex minimization, convex-concave saddle point problems, and convex Nash equilibrium problems. The main body of the paper is organized as follows: in Section 2, we describe several special cases of monotone v.i.’s we are especially interested in (convex Nash equilibria, convex-concave saddle point problems, convex minimization). We single out these special cases since here one can define a useful “functional” counterpart ErrN(·) of the just defined error Errvi(·); both ErrN and Errvi will participate in our subsequent efficiency estimates. In that manuscript, the proposed approach is applied to bilinear saddle point problems arising in sparse l1 recovery, i.e., to variational inequalities with affine monotone operators F, and the goal is to accelerate the solution process by replacing the precise values of F, which are computationally expensive in the large-scale case, by computationally cheap unbiased random estimates of these values (cf. Section 4.4 below). The accuracy measure ErrN admits a transparent justification: it is the sum, over the players, of the incentives for a player to change his choice given that the other players stick to their choices.
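For the Nash case, this justification can be written out explicitly. With m players, where player i chooses z_i from a set Z_i and incurs the cost φ_i(z) = φ_i(z_i, z^{-i}) (the notation z^{-i} for the other players' choices is assumed here for illustration), the accuracy measure just described reads

```latex
\mathrm{Err}_{\mathrm{N}}(z) \;=\; \sum_{i=1}^{m}
\Big[\, \phi_i(z) \;-\; \min_{w_i \in Z_i} \phi_i\big(w_i,\, z^{-i}\big) \Big],
```

each summand being the incentive of player i to deviate while the other players keep their choices fixed; ErrN(z) vanishes exactly at Nash equilibria.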

Special case
Example
Algorithm
Discussion

