Abstract

In a Hilbert space setting ${\mathcal{H}}$, in order to minimize by fast methods a general convex, lower semicontinuous, and proper function ${\Phi }: {\mathcal{H}} \rightarrow \mathbb {R} \cup \{+\infty \}$, we analyze the convergence rate of inertial proximal algorithms. These algorithms involve both extrapolation coefficients (including Nesterov's acceleration method) and proximal coefficients in a general form. They can be interpreted as the discrete-time version of inertial continuous gradient systems with general damping and time scale coefficients. Under an appropriate setting of these parameters, we show the fast convergence of values and the convergence of iterates. In doing so, we provide an overview of this class of algorithms. Our study complements the previous Attouch–Cabot paper (SIOPT, 2018) by introducing time scaling aspects into the algorithm, and sheds new light on Güler's seminal papers on the convergence rate of accelerated proximal methods for convex optimization.
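The inertial proximal scheme studied here alternates an extrapolation (momentum) step with a proximal step. A minimal sketch in finite dimensions is given below; the choice $\Phi = \|\cdot\|_1$ (whose prox is soft thresholding), the Nesterov-style coefficient $k/(k+3)$, and the constant proximal step size are illustrative assumptions, not the paper's general parameter setting.

```python
import numpy as np

def soft_threshold(y, lam):
    # Proximal operator of lam * ||x||_1 (closed form for this example Phi).
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def inertial_proximal(prox, x0, lam=0.5, n_iter=200):
    """Generic inertial proximal scheme (illustrative parameters):
        y_k     = x_k + alpha_k (x_k - x_{k-1})   # extrapolation step
        x_{k+1} = prox_{lam Phi}(y_k)             # proximal step
    with Nesterov-type extrapolation coefficients alpha_k = k / (k + 3)."""
    x_prev = x0.copy()
    x = x0.copy()
    for k in range(1, n_iter + 1):
        alpha = k / (k + 3.0)           # extrapolation coefficient
        y = x + alpha * (x - x_prev)    # inertial extrapolation
        x_prev = x
        x = prox(y, lam)                # proximal step with step size lam
    return x

# For Phi = ||.||_1, the unique minimizer is the origin.
x_star = inertial_proximal(soft_threshold, np.array([5.0, -3.0]))
```

Swapping in the prox of another convex $\Phi$ (e.g. the projection onto a convex set for an indicator function) gives the corresponding accelerated method for that problem.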
