Abstract

Two inexact versions of a Bregman-function-based proximal method for finding a zero of a maximal monotone operator, suggested in [J. Eckstein (1998). Approximate iterations in Bregman-function-based proximal algorithms. Math. Programming, 83, 113–123; P. da Silva, J. Eckstein and C. Humes (2001). Rescaling and stepsize selection in proximal methods using separable generalized distances. SIAM J. Optim., 12, 238–261], are considered. For a wide class of Bregman functions, including the standard entropy kernel and all strongly convex Bregman functions (the assumptions specifying this class coincide with A1–A3 in Section 2), convergence of these methods is proved under an essentially weaker accuracy condition on the iterates than in the original papers. The error criterion of the logarithmic–quadratic proximal method developed in [A. Auslender, M. Teboulle and S. Ben-Tiba (1999). A logarithmic-quadratic proximal method for variational inequalities. Computational Optimization and Applications, 12, 31–40] is also relaxed, and convergence results for the inexact version of the proximal method with entropy-like distance functions are described. For the methods mentioned, as in [R.T. Rockafellar (1976). Monotone operators and the proximal point algorithm. SIAM J. Control Optim., 14, 877–898] for the classical proximal point algorithm, only summability of the sequence of error vector norms is required.
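For orientation, a schematic sketch of the type of iteration discussed here is given below; it is not taken verbatim from the paper, and the operator T, Bregman kernel f, stepsizes λ_k and error vectors e^k are generic notation introduced only for illustration:

\[
e^{k+1} \in \lambda_k T(x^{k+1}) + \nabla f(x^{k+1}) - \nabla f(x^k),
\qquad
\sum_{k=0}^{\infty} \| e^{k+1} \| < \infty,
\]

where \( D_f(x,y) = f(x) - f(y) - \langle \nabla f(y),\, x - y \rangle \) is the Bregman distance induced by f, so that \( \nabla f(x^{k+1}) - \nabla f(x^k) = \nabla_1 D_f(x^{k+1}, x^k) \). The summability condition on the error norms is the Rockafellar-type criterion referred to in the abstract.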
