Abstract

We investigate the techniques and ideas used in Shefi and Teboulle (SIAM J Optim 24(1), 269–297, 2014) in the convergence analysis of two proximal ADMM algorithms for solving convex optimization problems involving compositions with linear operators. In addition, we formulate a variant of the ADMM algorithm that can handle convex optimization problems whose objective contains an additional smooth function, which is evaluated in the algorithm through its gradient. Moreover, we allow the use of variable metrics in each iteration, and the investigations are carried out in the setting of infinite-dimensional Hilbert spaces. The resulting algorithmic scheme is investigated from the point of view of its convergence properties.

Highlights

  • One of the most popular numerical algorithms for solving optimization problems of the form inf_{x ∈ R^n} {f(x) + g(Ax)} (1), where f : R^n → R̄ := R ∪ {±∞} and g : R^m → R̄ are proper, convex, lower semicontinuous functions and A : R^n → R^m is a linear operator, is the alternating direction method of multipliers (ADMM); a sketch of the generic ADMM iteration for (1) is given after this list

  • We investigate the techniques and ideas used in Shefi and Teboulle (SIAM J Optim 24(1), 269–297, 2014) in the convergence analysis of two proximal ADMM algorithms for solving convex optimization problems involving compositions with linear operators

  • We formulate a variant of the ADMM algorithm that can handle convex optimization problems whose objective contains an additional smooth function, which is evaluated in the algorithm through its gradient
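
To make the generic scheme concrete, here is a minimal, hedged sketch of the classical ADMM iteration for problem (1). The instance is purely illustrative and not taken from the paper: we assume f(x) = 0.5‖x − b‖² and g = lam·‖·‖₁ so that both subproblems have closed-form solutions; the penalty parameter c, the iteration count, and the data are arbitrary choices.

```python
# Minimal sketch of the classical ADMM iteration for problem (1),
#     inf_x { f(x) + g(Ax) },
# via the splitting  min_{x,z} f(x) + g(z)  subject to  Ax = z.
# Illustrative instance (not from the paper): f(x) = 0.5*||x - b||^2,
# g = lam*||.||_1, with penalty parameter c > 0.

import numpy as np

def soft_threshold(v, tau):
    """Proximal map of tau*||.||_1 (componentwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm(A, b, lam=0.1, c=1.0, iters=200):
    m, n = A.shape
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)   # y is the dual variable
    # x-update: minimize 0.5*||x - b||^2 + (c/2)*||Ax - z + y/c||^2,
    # i.e. solve (I + c*A^T A) x = b + c*A^T (z - y/c).
    lhs = np.eye(n) + c * A.T @ A
    for _ in range(iters):
        x = np.linalg.solve(lhs, b + c * A.T @ (z - y / c))
        z = soft_threshold(A @ x + y / c, lam / c)     # prox of g/c
        y = y + c * (A @ x - z)                        # multiplier (dual) update
    return x

# Illustrative usage on random data:
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
b = rng.standard_normal(20)
x_sol = admm(A, b)
```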


Summary

Introduction

One of the most popular numerical algorithms for solving optimization problems of the form (1) is the alternating direction method of multipliers (ADMM). We propose an extension of the ADMM algorithm considered in [28], which we investigate from the perspective of its convergence properties. This extension is twofold: on the one hand, we consider an additional convex differentiable function in the objective of the optimization problem (1), which is evaluated in the algorithm through its gradient; on the other hand, instead of fixed matrices M1 and M2, we use different matrices in each iteration. We prove an ergodic convergence rate result for this algorithm, involving a primal-dual gap function formulated in terms of the associated Lagrangian l, as well as a convergence result for the sequence of iterates to a saddle point of l. Primal-dual algorithms with dynamic step sizes have been investigated in [13] and [9], where it has been shown that clever strategies in the choice of the step sizes can improve the convergence behavior.
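
The two modifications can be sketched as follows. This is a hedged illustration under additional assumptions, not the paper's exact scheme: the problem is min_x f(x) + h(x) + g(Ax) with h smooth and accessed only through its gradient; for the splitting Ax = z, the associated Lagrangian reads l(x, z, y) = f(x) + h(x) + g(z) + ⟨y, Ax − z⟩ (standard construction; the paper's normalization may differ). For the variable metric we use the common preconditioning choice M1^k = (1/τ_k) I − c AᵀA, which turns the x-subproblem into a single proximal step; the instance below (f the indicator of the nonnegative orthant, h(x) = 0.5‖Bx − b‖², g = lam·‖·‖₁, and the step-size rule for τ_k) is purely hypothetical.

```python
# Hedged sketch (illustration only, not the paper's exact algorithm) of a
# proximal ADMM variant for  min_x f(x) + h(x) + g(Ax):
#   * the smooth part h enters only through its gradient,
#   * the x-update carries a variable-metric proximal term
#     (1/2)*||x - x^k||^2_{M1^k} with M1^k = (1/tau_k) I - c*A^T A,
#     which reduces the x-subproblem to a single prox/projection step.
# Assumed instance: f = indicator of {x >= 0}, h(x) = 0.5*||B x - b||^2,
# g = lam*||.||_1; all parameter choices below are hypothetical.

import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def variable_metric_admm(A, B, b, lam=0.1, c=1.0, iters=300):
    m, n = A.shape
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    L_h = np.linalg.norm(B, 2) ** 2        # Lipschitz constant of grad h
    norm_A2 = np.linalg.norm(A, 2) ** 2
    for k in range(iters):
        # Variable step size tau_k (illustrative rule only; the paper states
        # its own conditions on the metrics M1^k, M2^k).
        tau_k = (0.5 + 0.4 * k / (k + 1)) / (c * norm_A2 + L_h)
        grad_h = B.T @ (B @ x - b)         # h is evaluated via its gradient
        # x-update: projection onto {x >= 0} of a forward (gradient-like) step
        x = np.maximum(x - tau_k * (grad_h + c * A.T @ (A @ x - z + y / c)), 0.0)
        # z-update (taking M2^k = 0 here): prox of g/c
        z = soft_threshold(A @ x + y / c, lam / c)
        y = y + c * (A @ x - z)            # multiplier update
    return x
```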

Ergodic convergence rates for the primal-dual gap
Convergence of the sequence of generated iterates