Abstract

In this paper we present a relaxed version of the extragradient–proximal point algorithm recently proposed by Solodov and Svaiter for finding a zero of a maximal monotone operator defined on a Hilbert space. The aim is to introduce a family of relaxation parameters in order to accelerate the convergence of this algorithm. We first study the convergence and the rate of convergence of the relaxed algorithms, and then apply them to the generalized variational inequality problem. For this problem, the operator is the sum of two operators: the first is single-valued, monotone, and continuous, and the second is the subdifferential of a nonsmooth, lower semicontinuous, proper convex function ϕ. To make the subproblems easier to solve, we consider, as in bundle methods, piecewise linear convex approximations of ϕ. We explain how to construct these approximations and how the resulting subproblems fit within the framework of our relaxed extragradient–proximal point algorithm. We prove the convergence of the resulting algorithm without assuming a Dunn property on the single-valued operator. Finally, we report numerical experiments that illustrate the behavior of our implementable algorithm for different values of the relaxation factor, and we compare it with another algorithm.
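To fix ideas, the extragradient–proximal point scheme with a relaxation factor can be sketched as follows. This is only an illustrative finite-dimensional sketch, not the paper's algorithm: it assumes a single-valued monotone operator `T` whose resolvent `(I + cT)^{-1}` is computed exactly (the paper allows inexact proximal steps and a set-valued operator), and the names `relaxed_extragradient_prox`, `rho` (the relaxation factor), and `c` (the proximal parameter) are our own.

```python
import numpy as np

def relaxed_extragradient_prox(T, resolvent, x0, c=1.0, rho=1.0,
                               tol=1e-10, max_iter=1000):
    """Illustrative relaxed extragradient-proximal point iteration.

    Each iteration performs
        y_k      = (I + c T)^{-1}(x_k)   # proximal (resolvent) step, exact here
        x_{k+1}  = x_k - rho * c * T(y_k)  # extragradient update, relaxed by rho
    and stops when T(y_k) is (numerically) zero, i.e. y_k is a zero of T.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = resolvent(x, c)          # proximal step
        v = T(y)                     # operator value at the extragradient point
        if np.linalg.norm(v) < tol:  # y is an approximate zero of T
            return y
        x = x - rho * c * v          # relaxed extragradient update
    return x

# Toy example: T(x) = A x with A monotone (its symmetric part is the identity).
A = np.array([[1.0, 2.0],
              [-2.0, 1.0]])
T = lambda x: A @ x
resolvent = lambda x, c: np.linalg.solve(np.eye(2) + c * A, x)  # (I + cA)^{-1} x

# rho = 1 recovers the unrelaxed scheme; here we over-relax slightly.
x_star = relaxed_extragradient_prox(T, resolvent, x0=[5.0, -3.0], c=1.0, rho=1.2)
```

In this toy problem the unique zero of `T` is the origin, and the iteration drives `x_star` toward it; varying `rho` changes the contraction factor, which is the effect the relaxation parameters are designed to exploit.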
