Abstract

Employing the ideas of non-linear preconditioning and testing of the classical proximal point method, we formalise common arguments in convergence rate and convergence proofs of optimisation methods as the verification of a simple iteration-wise inequality. When applied to fixed-point operators, the latter can be seen as a generalisation of firm non-expansivity or the $\alpha$-averaged property. The main purpose of this work is to provide the abstract background theory for our companion paper "Block-proximal methods with spatially adapted acceleration". In the present account we demonstrate the effectiveness of the general approach on several classical algorithms, as well as their stochastic variants. Besides the proximal point method itself, these methods include gradient descent, forward--backward splitting, Douglas--Rachford splitting, Newton's method, as well as several methods for saddle-point problems, such as the Alternating Directions Method of Multipliers (ADMM) and the Chambolle--Pock method.
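For orientation, a minimal sketch of the classical background that the abstract builds on; the notation here, with a maximally monotone operator $H$, step lengths $\tau_k > 0$, and iterates $x^k$, is ours for illustration and not taken from the paper. The classical proximal point method solves $0 \in H(\hat x)$ via the resolvent iteration
$$
x^{k+1} := (I + \tau_k H)^{-1}(x^k),
\qquad\text{equivalently}\qquad
0 \in \tau_k H(x^{k+1}) + (x^{k+1} - x^k).
$$
The firm non-expansivity property that the iteration-wise inequality generalises states, for the resolvent $T := (I + \tau H)^{-1}$, that
$$
\|Tx - Ty\|^2 \le \langle Tx - Ty,\, x - y \rangle
\qquad\text{for all } x, y.
$$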
