Abstract

We investigate the convergence of a recently popular class of first-order primal–dual algorithms for saddle-point problems in the presence of errors in the proximal maps and gradients. We study several types of errors and show that, provided these errors decay sufficiently fast, the same convergence rates as for the error-free algorithm can be established. More precisely, we prove the (optimal) $O(1/N)$ convergence to a saddle point in finite dimensions for the class of non-smooth problems considered in this paper, an $O(1/N^2)$ convergence rate if either the primal or the dual objective is strongly convex, and even a linear $O(\theta^N)$ rate if both are. Moreover, we show that rates can also be established under a slower decay of errors; these rates are, however, slower and depend directly on the decay of the errors. We demonstrate the performance and practical use of the algorithms on the example of nested algorithms and show how they can be used to split the global objective more efficiently.
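
For orientation, here is a minimal sketch of the saddle-point setting and of the exact primal–dual hybrid gradient (PDHG) iteration on which this class of algorithms is built; the notation below is ours, and the inexact variants studied in the paper replace the proximal operators (and, in the smooth case, gradients) by approximate evaluations:

    % saddle-point problem with linear operator K and convex G, F
    \min_{x \in X} \max_{y \in Y} \; \langle Kx, y \rangle + G(x) - F^*(y)

    % one exact iteration, step sizes \tau, \sigma > 0, extrapolation \theta \in [0, 1]
    y^{n+1} = \operatorname{prox}_{\sigma F^*}\big( y^n + \sigma K \bar{x}^n \big)
    x^{n+1} = \operatorname{prox}_{\tau G}\big( x^n - \tau K^* y^{n+1} \big)
    \bar{x}^{n+1} = x^{n+1} + \theta \big( x^{n+1} - x^n \big)

If, for instance, $\operatorname{prox}_{\tau G}$ can only be evaluated up to an error $\varepsilon_n$ at iteration $n$, the results of the paper quantify how fast $\varepsilon_n$ must vanish for the rates quoted above to survive.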

Highlights

  • The numerical solution of nonsmooth optimization problems and the acceleration of their convergence have been regarded as fundamental issues in the past 10–20 years

  • We show that the bounds for inexact primal–dual algorithms established in this paper can be used to make the nested approach viable for entirely non-differentiable problems such as the TV-L1 model (a sketch of such a nested scheme follows this list), while the results of [61] for partly smooth objectives can be obtained as a special case of the accelerated versions

  • In this paper we investigated the convergence of the class of primal–dual algorithms developed in [15, 18, 54] in the presence of errors occurring in the computation of the proximal points and/or gradients
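
As a companion to the nested approach mentioned above, the following is a small self-contained sketch, not the paper's code: the toy problem, operator names and the inner-iteration schedule are our own choices. It runs an inexact primal–dual iteration for 1-D TV-regularized deblurring, where the TV proximal map has no closed form and is approximated by a nested inner loop whose iteration count grows with the outer counter, so that the proximal errors decay:

    import numpy as np

    rng = np.random.default_rng(0)

    n = 100
    A = rng.standard_normal((n, n)) / np.sqrt(n)       # toy "blur" operator
    x_true = np.zeros(n); x_true[30:60] = 1.0          # piecewise-constant signal
    b = A @ x_true + 0.01 * rng.standard_normal(n)
    lam = 0.05                                         # TV regularization weight

    def D(x):                                          # forward differences
        return np.diff(x)

    def DT(p):                                         # adjoint of D
        out = np.zeros(p.size + 1)
        out[1:] += p
        out[:-1] -= p
        return out

    def inexact_prox_tv(z, w, iters):
        # Approximates prox of w*TV at z by projected gradient descent on the
        # dual problem  min_{|p_i| <= 1} 0.5 * || w * DT(p) - z ||^2 .
        p = np.zeros(z.size - 1)
        step = 1.0 / (4.0 * w * w)                     # 1/L, since ||D||^2 <= 4 in 1-D
        for _ in range(iters):
            p = np.clip(p - step * w * D(w * DT(p) - z), -1.0, 1.0)
        return z - w * DT(p)

    # Inexact PDHG on  min_x 0.5*||A x - b||^2 + lam*TV(x):
    # the quadratic term is dualized (F*(y) = 0.5*||y||^2 + <b, y>),
    # the TV prox is evaluated inexactly by the nested loop above.
    Lop = np.linalg.norm(A, 2)
    tau = sigma = 0.9 / Lop                            # tau*sigma*||A||^2 < 1
    x = np.zeros(n); x_bar = x.copy(); y = np.zeros(n)
    for k in range(300):
        y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)  # exact prox of sigma*F*
        x_old = x
        # inner iteration count grows with k, so the prox error decays with k
        x = inexact_prox_tv(x - tau * (A.T @ y), tau * lam, iters=5 + k)
        x_bar = 2.0 * x - x_old                            # extrapolation, theta = 1

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

How fast the inner accuracy has to increase to preserve the O(1/N) or accelerated rates is precisely what the bounds in the paper quantify; the linearly growing inner iteration count above is only a simple heuristic.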

Introduction

The numerical solution of nonsmooth optimization problems and the acceleration of their convergence have been regarded as fundamental issues in the past 10–20 years. Two of the most popular approaches are forward–backward splittings [22, 23, 42], in particular the FISTA method [7, 8], and first-order primal–dual algorithms.
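
For reference, a minimal sketch of the FISTA-style forward–backward iteration mentioned above, applied to a lasso-type toy problem 0.5*||A x - b||^2 + lam*||x||_1 (the problem data and names are our own, illustrative choices):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((60, 120)) / np.sqrt(60.0)
    x_sparse = rng.standard_normal(120) * (rng.random(120) < 0.1)
    b = A @ x_sparse + 0.01 * rng.standard_normal(60)
    lam = 0.1
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth gradient

    def soft_threshold(z, t):          # prox of t*||.||_1
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    x = np.zeros(120); v = x.copy(); t = 1.0
    for _ in range(200):
        # forward (gradient) step on the smooth part, backward (prox) step on l1
        x_new = soft_threshold(v - (A.T @ (A @ v - b)) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        v = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
        x, t = x_new, t_new

    print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())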

Inexact computations of the proximal point
Inexact primal–dual algorithms
The convex case: no acceleration
The convex case: a stronger version
The strongly convex case: primal acceleration
The strongly convex case: dual acceleration
The smooth case
Numerical experiments
Nondifferentiable deblurring with the TV-L1 model
Differentiable deblurring with the TV-L2 model
Smooth deblurring with the TV-L2 model
Conclusion and outlook