Abstract
The alternating direction method of multipliers (ADMM) is one of the most foundational algorithms for linearly constrained composite minimization problems. For specific classes of problems, variations of ADMM (such as linearized ADMM and proximal ADMM) have been developed. By using Bregman distances, many of these ADMM variants can be formulated within a uniform mathematical scheme, the Bregman ADMM (BADMM). Although variational inequalities have been widely used to study ADMM, such an analysis for BADMM has been missing. In this paper, we study the convergence of BADMM via variational inequalities. We present a proof framework for BADMMs and then give a very concise convergence proof for the basic BADMM. As applications, we consider several variations of BADMM and obtain the corresponding convergence results.
Highlights
Composite optimization problems with linear constraints are ubiquitous across different disciplines and applications
We propose several variations of BADMM and present very concise convergence analyses for BADMM and its variations via the proposed framework
We present several variational inequalities (VIs) for problem (1)
Summary
Composite optimization problems with linear constraints are ubiquitous in different disciplines and applications:

min_{x∈X, y∈Y} θ1(x) + θ2(y)  s.t.  Ax + By = b,   (1)

where θ1 and θ2 are proper, closed and convex but possibly nondifferentiable, X ⊆ R^{n1}, Y ⊆ R^{n2}, A ∈ R^{m×n1}, B ∈ R^{m×n2}, and b ∈ R^m. A classical and efficient solver is the Alternating Direction Method of Multipliers (ADMM) algorithm [7], [8], [17], [19]. ADMM works on the augmented Lagrangian of (1) rather than on the original problem, that is,

Lβ(x, y, λ) := θ1(x) + θ2(y) − ⟨λ, Ax + By − b⟩ + (β/2)‖Ax + By − b‖².

The ADMM algorithm employs an alternating strategy: at each iteration it minimizes over one variable while fixing the others, and the multiplier λ is updated by a feedback (dual ascent) step.
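To make the alternating structure concrete, the following is a minimal sketch of the ADMM iteration on a toy instance of problem (1): θ1(x) = ½‖x − c‖², θ2(y) = ½‖y − d‖², with A = I, B = −I, b = 0 (a consensus constraint x = y). The instance, the names c, d, beta, and the closed-form update formulas are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def admm_consensus(c, d, beta=1.0, iters=100):
    """ADMM for min 0.5*||x-c||^2 + 0.5*||y-d||^2  s.t.  x - y = 0."""
    x = np.zeros_like(c)
    y = np.zeros_like(d)
    lam = np.zeros_like(c)  # multiplier lambda for the constraint x - y = 0
    for _ in range(iters):
        # x-step: argmin_x 0.5*||x-c||^2 - <lam, x-y> + (beta/2)*||x-y||^2,
        # with y and lam fixed; here it has a closed form.
        x = (c + lam + beta * y) / (1.0 + beta)
        # y-step: argmin_y 0.5*||y-d||^2 - <lam, x-y> + (beta/2)*||x-y||^2,
        # with the freshly updated x and lam fixed.
        y = (d - lam + beta * x) / (1.0 + beta)
        # feedback (dual ascent) step on the residual Ax + By - b = x - y
        lam = lam - beta * (x - y)
    return x, y

c = np.array([2.0])
d = np.array([4.0])
x, y = admm_consensus(c, d)
# both iterates approach the constrained minimizer x = y = (c + d) / 2
```

For this quadratic instance each subproblem is solved exactly; in general the x- and y-steps are themselves (possibly proximal or linearized) minimization problems, which is what motivates the BADMM variants studied in the paper.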