A Resampling-Free Stochastic Projection Contraction Algorithm for Solving Stochastic Variational Inequalities

Abstract

Sampling is a major computational bottleneck in stochastic algorithms. This paper proposes a stochastic projection contraction algorithm for stochastic variational inequality problems, significantly reducing runtime by eliminating resampling in the correction step. We introduce an adjustable offset weight to optimize the search direction, along with different adaptive step-size strategies for the prediction and correction steps. We further present discrete differential equation interpretations for specific offset-weight values. To address the bias caused by the absence of resampling in the correction step, we develop an error control scheme and provide convergence guarantees. Numerical experiments demonstrate the algorithm’s efficiency.
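
For illustration, here is a minimal Python sketch of a resampling-free prediction-correction iteration in the spirit of the abstract; it is not the authors' algorithm, and the projection, the oracle sample_F, the step sizes beta and tau, and the offset weight mu are all placeholder assumptions.

    import numpy as np

    def proj_box(x, lo=-1.0, hi=1.0):
        # Euclidean projection onto a box; stands in for the projection onto the feasible set.
        return np.clip(x, lo, hi)

    def spc_step(x, sample_F, batch, beta=1.0, tau=0.9, mu=0.5):
        # One hypothetical resampling-free prediction-correction step.
        # sample_F(z, batch): stochastic estimate of the expected mapping F(z).
        g_x = sample_F(x, batch)               # draw the sample once
        y = proj_box(x - beta * g_x)           # prediction (projection) step
        g_y = sample_F(y, batch)               # reuse the SAME batch: no resampling
        d = (x - y) - mu * beta * (g_x - g_y)  # offset-weighted search direction
        return x - tau * d                     # correction step

In the paper, beta, tau and mu are chosen adaptively; the fixed values above are only defaults for the sketch.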

Similar Papers
  • Research Article
  • 10.1080/02331934.2025.2559891
An inertial-type stochastic self-adaptive algorithm for stochastic pseudomonotone variational inequality problem
  • Sep 17, 2025
  • Optimization
  • Zhaoli Ma + 4 more

In this paper, a new stochastic self-adaptive subgradient extragradient approximation algorithm incorporating an inertial technique is proposed to solve the stochastic pseudomonotone variational inequality problem. The convergence, convergence rate and oracle complexity of the algorithm are investigated. A numerical example illustrates the effectiveness of the new algorithm. The numerical results show that our algorithm is competitive with other related algorithms in the literature [Yang et al. Variance-based modified backward-forward algorithm with line search for stochastic variational inequality problems and its applications. Asia-Pac J Oper Res. 2020;37(3):2050011] and [Wang et al. A self-adaptive stochastic subgradient extragradient algorithm for the stochastic pseudomonotone variational inequality problem with application. Z Angew Math Phys. 2022;73(4):164]. Finally, the main results obtained are applied to solve an image restoration problem.
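
The abstract names an inertial step combined with the subgradient extragradient scheme; the following Python sketch (illustrative only, with hypothetical parameter names, not the paper's code) shows one such step, in which the second projection is onto a half-space and therefore has a closed form.

    import numpy as np

    def inertial_sgeg_step(x, x_prev, sample_F, proj_C, lam=0.5, theta=0.3):
        # One illustrative inertial subgradient-extragradient step.
        w = x + theta * (x - x_prev)          # inertial extrapolation
        g_w = sample_F(w)
        y = proj_C(w - lam * g_w)             # first projection, onto the feasible set
        a = w - lam * g_w - y                 # normal of the half-space T containing C
        v = w - lam * sample_F(y)
        viol = float(np.dot(a, v - y))
        if viol > 0.0:                        # project v onto T only if it lies outside
            v = v - (viol / float(np.dot(a, a))) * a
        return v, x                           # new iterate and the new "previous" iterate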

  • Research Article
  • Cited by 37
  • 10.1007/s11228-018-0472-9
On Stochastic Mirror-prox Algorithms for Stochastic Cartesian Variational Inequalities: Randomized Block Coordinate and Optimal Averaging Schemes
  • Mar 20, 2018
  • Set-Valued and Variational Analysis
  • Farzad Yousefian + 2 more

Motivated by multi-user optimization problems and non-cooperative Nash games in uncertain regimes, we consider stochastic Cartesian variational inequality problems where the set is given as the Cartesian product of a collection of component sets. First, we consider the case where the number of the component sets is large and develop a randomized block stochastic mirror-prox algorithm, where at each iteration only a randomly selected block coordinate of the solution vector is updated through implementing two consecutive projection steps. We show that when the mapping is strictly pseudo-monotone, the algorithm generates a sequence of iterates that converges to the solution of the problem almost surely. When the maps are strongly pseudo-monotone, we prove that the mean-squared error diminishes at the optimal rate. Second, we consider large-scale stochastic optimization problems with convex objectives and develop a new averaging scheme for the randomized block stochastic mirror-prox algorithm. We show that by using a different set of weights than those employed in the classical stochastic mirror-prox methods, the objective values of the averaged sequence converge to the optimal value in the mean sense at an optimal rate. Third, we consider stochastic Cartesian variational inequality problems and develop a stochastic mirror-prox algorithm that employs the new weighted averaging scheme. We show that the expected value of a suitably defined gap function converges to zero at an optimal rate.
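
As a rough illustration of the randomized block update described above (with a Euclidean prox standing in for the mirror map, and all names hypothetical), one iteration might look like the following sketch rather than the authors' actual scheme:

    import numpy as np

    def block_mirror_prox_step(x, blocks, sample_F, proj_block, gamma, rng):
        # blocks: list of index arrays partitioning the coordinates of x.
        # proj_block(i, v): projection of the i-th block onto its component set.
        i = rng.integers(len(blocks))                       # pick one block uniformly
        idx = blocks[i]
        g = sample_F(x)
        y = x.copy()
        y[idx] = proj_block(i, x[idx] - gamma * g[idx])     # first projection step
        h = sample_F(y)
        x_new = x.copy()
        x_new[idx] = proj_block(i, x[idx] - gamma * h[idx]) # second (extragradient) step
        return x_new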

  • Research Article
  • Cited by 4
  • 10.1080/02331934.2024.2312198
Stochastic Bregman extragradient algorithm with line search for stochastic mixed variational inequalities
  • Feb 3, 2024
  • Optimization
  • Xian-Jun Long + 2 more

In this paper, we present a stochastic Bregman extragradient algorithm with line search for solving a class of stochastic mixed variational inequalities, which requires no information about the Lipschitz constant and allows a potentially larger step size to be searched at each iteration. Compared with the existing algorithms, the proposed algorithm allows different step sizes in the prediction and correction steps, thus enhancing the algorithm's flexibility. Under generalized monotonicity, we derive the almost sure convergence, the iteration complexity O(1/ε) and the oracle complexity O(1/ε²) for our algorithm. Furthermore, under generalized strong monotonicity and with the sample size increasing at a geometric rate, the linear convergence rate of the proposed algorithm, with respect to the Bregman distance between the iterates and the solution, is established. Numerical results demonstrate a favourable comparison of the proposed algorithm with existing ones.
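
A minimal sketch of an extragradient step with a backtracking line search of the kind described (the acceptance test, shrink factor, and prox are illustrative assumptions, not the paper's exact rule):

    import numpy as np

    def eg_linesearch_step(x, sample_F, prox, eta=1.0, shrink=0.5, theta=0.9, max_ls=20):
        # prox(v): Bregman/Euclidean proximal step handling the nonsmooth term.
        g_x = sample_F(x)
        for _ in range(max_ls):
            y = prox(x - eta * g_x)          # prediction step with trial step size eta
            g_y = sample_F(y)
            if eta * np.linalg.norm(g_y - g_x) <= theta * np.linalg.norm(y - x):
                break                        # local Lipschitz-type test passed
            eta *= shrink                    # otherwise shrink the trial step size
        return prox(x - eta * g_y)           # correction step (the paper allows a
                                             # different step size here)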

  • Research Article
  • Cited by 18
  • 10.1007/s10589-020-00185-z
Quantitative analysis for a class of two-stage stochastic linear variational inequality problems
  • Mar 21, 2020
  • Computational Optimization and Applications
  • Jie Jiang + 2 more

This paper considers a class of two-stage stochastic linear variational inequality problems whose first stage problems are stochastic linear box-constrained variational inequality problems and whose second stage problems are stochastic linear complementarity problems having a unique solution. We first give conditions for the existence of solutions to both the original problem and its perturbed problems. Next we derive quantitative stability assertions of this two-stage stochastic problem under total variation metrics via the corresponding residual function. Moreover, we study the discrete approximation problem. The convergence and the exponential rate of convergence of optimal solution sets are obtained under moderate assumptions, respectively. Finally, through solving a non-cooperative game in which each player’s problem is a parameterized two-stage stochastic program, we numerically illustrate our theoretical results.

  • Research Article
  • Cited by 17
  • 10.1007/s10957-019-01578-9
An Infeasible Stochastic Approximation and Projection Algorithm for Stochastic Variational Inequalities
  • Aug 16, 2019
  • Journal of Optimization Theory and Applications
  • Xiao-Juan Zhang + 3 more

In this paper, we consider a stochastic variational inequality, in which the mapping involved is an expectation of a given random function. Inspired by the work of He (Appl Math Optim 35:69–76, 1997) and the extragradient method proposed by Iusem et al. (SIAM J Optim 29:175–206, 2019), we propose an infeasible projection algorithm with a line search scheme, which can be viewed as a modification of the above-mentioned method of Iusem et al. In particular, in the correction step, we replace the projection by computing a search direction and a stepsize, so that only one projection is needed at each iteration, while the method of Iusem et al. requires two projections at each iteration. Moreover, we use a dynamically sampled scheme with line search to cope with the absence of a Lipschitz constant, and choose the stepsize to be bounded away from zero and the direction to be a descent direction. In the process of stochastic approximation, we iteratively reduce the variance of the stochastic error. Under appropriate assumptions, we derive some properties related to convergence, convergence rate, and oracle complexity. In particular, compared with the method of Iusem et al., our method uses fewer projections and has the same iteration complexity, but a higher oracle complexity for a given tolerance in a finite dimensional space. Finally, we report some numerical experiments to show its efficiency.
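
To make the single-projection idea concrete, here is an illustrative Python sketch (hypothetical names; dynamic sampling and line search omitted): the prediction step uses the only projection, and the correction step moves along an explicit direction with an analytic step size.

    import numpy as np

    def single_projection_step(x, sample_F, proj_C, beta, batch):
        g_x = sample_F(x, batch)
        y = proj_C(x - beta * g_x)                     # the only projection
        g_y = sample_F(y, batch)
        d = (x - y) - beta * (g_x - g_y)               # search direction
        denom = float(np.dot(d, d))
        alpha = float(np.dot(x - y, d)) / denom if denom > 0.0 else 0.0
        return x - alpha * d                           # correction: no projection needed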

  • Research Article
  • 10.3390/math11153376
Deterministic Bi-Criteria Model for Solving Stochastic Mixed Vector Variational Inequality Problems
  • Aug 2, 2023
  • Mathematics
  • Meiju Luo + 2 more

In this paper, we consider stochastic mixed vector variational inequality problems. Firstly, we present an equivalent form of the stochastic mixed vector variational inequality problems. Secondly, we present a deterministic bi-criteria model to give a reasonable resolution of the stochastic mixed vector variational inequality problems and further propose the approximation problem for solving the given deterministic model by employing the smoothing technique and the sample average approximation method. Thirdly, we obtain the convergence analysis for the proposed approximation problem when the sample space is compact. Finally, we propose a compact approximation method for the case when the sample space is not a compact set and provide the corresponding convergence results.

  • Research Article
  • Cited by 4
  • 10.1080/00036811.2020.1836352
Quantitative stability of two-stage stochastic linear variational inequality problems with fixed recourse
  • Oct 20, 2020
  • Applicable Analysis
  • Jianxun Liu + 2 more

This paper focuses on the quantitative stability of a class of two-stage stochastic linear variational inequality problems whose second stage problems are stochastic linear complementarity problems with a fixed recourse matrix. Firstly, we discuss the existence of solutions to this two-stage stochastic problem and its perturbed problems. Then, by using the corresponding residual function, we derive the quantitative stability of this two-stage stochastic problem under the Fortet-Mourier metric. Finally, we study the sample average approximation problem and obtain the convergence of optimal solution sets under moderate assumptions.

  • Research Article
  • 10.1007/s10898-024-01445-6
Stochastic golden ratio algorithm to non-convex stochastic mixed variational inequality problem
  • Nov 5, 2024
  • Journal of Global Optimization
  • Shenghua Wang + 2 more

  • Research Article
  • Cited by 7
  • 10.1007/s00033-022-01730-y
A self-adaptive stochastic subgradient extragradient algorithm for the stochastic pseudomonotone variational inequality problem with application
  • Jul 12, 2022
  • Zeitschrift für angewandte Mathematik und Physik
  • Shenghua Wang + 3 more

  • Research Article
  • Cited by 2
  • 10.1016/j.camwa.2024.03.025
An accelerated stochastic extragradient-like algorithm with new stepsize rules for stochastic variational inequalities
  • Mar 29, 2024
  • Computers & Mathematics with Applications
  • Liya Liu + 1 more

  • Research Article
  • Cited by 94
  • 10.1007/s10107-017-1175-y
On smoothing, regularization, and averaging in stochastic approximation methods for stochastic variational inequality problems
  • Jul 25, 2017
  • Mathematical Programming
  • Farzad Yousefian + 2 more

Traditionally, most stochastic approximation (SA) schemes for stochastic variational inequality (SVI) problems have required the underlying mapping to be either strongly monotone or monotone and Lipschitz continuous. In contrast, we consider SVIs with merely monotone and non-Lipschitzian maps. We develop a regularized smoothed SA (RSSA) scheme wherein the stepsize, smoothing, and regularization parameters are reduced after every iteration at a prescribed rate. Under suitable assumptions on the sequences, we show that the algorithm generates iterates that converge to the least norm solution in an almost sure sense, extending the results in Koshal et al. (IEEE Trans Autom Control 58(3):594–609, 2013) to the non-Lipschitzian regime. Additionally, we provide rate estimates that relate iterates to their counterparts derived from a smoothed Tikhonov trajectory associated with a deterministic problem. To derive non-asymptotic rate statements, we develop a variant of the RSSA scheme, denoted by aRSSA_r, in which we employ a weighted iterate-averaging parameterized by a scalar r, where r = 1 recovers the standard averaging scheme. The main contributions are threefold: (i) when r < 1 and the parameter sequences are chosen appropriately, we show that the averaged sequence converges to the least norm solution almost surely and a suitably defined gap function diminishes at an approximate rate O(1/k^{1/6}) after k steps; (ii) when r < 1, and smoothing and regularization are suppressed, the gap function admits the rate O(1/√k), thus improving the rate O(ln(k)/√k) under standard averaging; and (iii) we develop a window-based variant of this scheme that also displays the optimal rate for r < 1. Notably, we prove the superiority of the scheme with r < 1 over its counterpart with r = 1 in terms of the constant factor of the error bound when the size of the averaging window is sufficiently large. We present the performance of the developed schemes on a stochastic Nash–Cournot game with merely monotone and non-Lipschitzian maps.
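
The r-parameterized averaging can be pictured with the small sketch below, which assumes (for illustration only) weights proportional to the step sizes raised to the power r; r = 1 then reproduces standard stepsize-weighted averaging.

    import numpy as np

    def weighted_average(iterates, stepsizes, r=0.5):
        # iterates: array of shape (k, n); stepsizes: length-k sequence.
        w = np.asarray(stepsizes, dtype=float) ** r
        w /= w.sum()
        return w @ np.asarray(iterates)     # r-weighted average of the iterates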

  • Research Article
  • Cited by 3
  • 10.1186/s13660-017-1529-2
Robust solutions to box-constrained stochastic linear variational inequality problem
  • Oct 10, 2017
  • Journal of Inequalities and Applications
  • Mei-Ju Luo + 1 more

We present a new method for solving the box-constrained stochastic linear variational inequality problem with three special types of uncertainty sets. Most previous methods, such as the expected value and expected residual minimization methods, need the probability distribution information of the stochastic variables. In contrast, we give the robust reformulation and reformulate the problem as a quadratically constrained quadratic program or a convex program with a conic quadratic inequality, which is tractable in optimization theory.

  • Research Article
  • Cited by 2
  • 10.1155/2020/1203627
Convergence Analysis of the Approximation Problems for Solving Stochastic Vector Variational Inequality Problems
  • Oct 8, 2020
  • Complexity
  • Meiju Luo + 1 more

In this paper, we consider stochastic vector variational inequality problems (SVVIPs). Because of the stochastic variables involved, the SVVIP generally may have no solution. To solve this problem, we employ the regularized gap function of the SVVIP as the loss function and then give a low-risk conditional value-at-risk (CVaR) model. However, this low-risk CVaR model is difficult to solve by general constrained optimization algorithms, because the objective function is nonsmooth and contains an expectation, which is not easy to compute. By using the sample average approximation technique and a smoothing function, we present corresponding approximation problems of the low-risk CVaR model to deal with these two difficulties. In addition, for the given approximation problems, we prove the convergence results of global optimal solutions and the convergence results of stationary points, respectively. Finally, a numerical experiment is given.
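
The CVaR part of the model can be approximated by the standard Rockafellar–Uryasev sample-average formula; the sketch below is a generic illustration of that step, not the paper's full approximation problem.

    import numpy as np

    def cvar_saa(losses, alpha=0.95):
        # Sample-average CVaR: min_t  t + mean(max(losses - t, 0)) / (1 - alpha),
        # evaluated here at t equal to the empirical alpha-quantile of the samples.
        t = np.quantile(losses, alpha)
        return t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)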

  • Research Article
  • 10.4156/aiss.vol4.issue3.9
Stochastic Variational Inequality for Supply Chain Network
  • Feb 29, 2012
  • International Journal on Advances in Information Sciences and Service Sciences
  • Bing Liang + 1 more

In this paper, we propose a stochastic variational inequality approach for a supply chain network, in which the cost functions (including both the production function and the transaction function) and the pricing cost function are contaminated by stochastic parameters. The network structure of the supply chain is identified and the stochastic variational inequality model is derived for the supply chain network. A sampling approximation algorithm is proposed to solve the resulting stochastic variational inequality problem by combining the Quasi-Monte Carlo sampling method and a homogeneous interior point method. The global convergence of the algorithm is proved and a preliminary example is given to show the efficiency of the proposed method.
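
As a generic illustration of the Quasi-Monte Carlo sampling ingredient (only the sampling step, not the paper's combination with the homogeneous interior point method), an expectation over a uniform random vector could be approximated as follows:

    import numpy as np
    from scipy.stats import qmc

    def qmc_expectation(f, dim, m=10, seed=0):
        # Sobol low-discrepancy estimate of E[f(xi)] with xi uniform on [0,1]^dim.
        sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
        pts = sampler.random_base2(m=m)          # 2**m quasi-random points
        return float(np.mean([f(p) for p in pts]))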

  • Conference Article
  • Cited by 41
  • 10.1109/cdc.2014.7040302
Optimal robust smoothing extragradient algorithms for stochastic variational inequality problems
  • Dec 1, 2014
  • Farzad Yousefian + 2 more

We consider stochastic variational inequality problems where the mapping is monotone over a compact convex set. We present two robust variants of stochastic extragradient algorithms for solving such problems. Of these, the first scheme employs an iterative averaging technique where we consider a generalized choice for the weights in the averaged sequence. Our first contribution is to show that, using an appropriate choice for these weights, a suitably defined gap function attains the optimal rate of convergence O(1/√k). In the second part of the paper, under an additional assumption of weak sharpness, we update the stepsize sequence using a recursive rule that leverages problem parameters. The second contribution lies in showing that, employing such a sequence, the extragradient algorithm possesses almost-sure convergence to the solution as well as convergence in a mean-squared sense to the solution of the problem at the rate O(1/k). Motivated by the absence of a Lipschitzian parameter, in both schemes we utilize a locally randomized smoothing scheme. Importantly, by approximating a smooth mapping, this scheme enables us to estimate the Lipschitzian parameter. The smoothing parameter is updated per iteration and we show convergence to the solution of the original problem in both algorithms.
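
The locally randomized smoothing ingredient can be sketched as evaluating the stochastic oracle at a point perturbed uniformly in a small ball; averaging such calls approximates a smoothed mapping whose Lipschitz constant can be bounded in terms of the smoothing radius (an illustrative reading, not the authors' exact estimator).

    import numpy as np

    def smoothed_oracle(sample_F, x, eps, rng):
        # One call to the oracle at a point drawn uniformly from the eps-ball around x.
        u = rng.normal(size=x.shape)
        u *= eps * rng.uniform() ** (1.0 / x.size) / np.linalg.norm(u)
        return sample_F(x + u)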
