Abstract
We propose in this paper a unifying scheme for several algorithms from the literature for solving monotone inclusion problems involving compositions with linear continuous operators in infinite-dimensional Hilbert spaces. We show that a number of primal-dual algorithms for monotone inclusions, as well as the classical ADMM numerical scheme for convex optimization problems and some of its variants, can be embedded in this unifying scheme. While the first part of the paper reports convergence results for the iterates, the second part is devoted to the derivation of convergence rates obtained by combining variable metric techniques with strategies based on a suitable choice of dynamical step sizes. The numerical performances obtained for different dynamical step size strategies are compared in the context of solving an image denoising problem.
Introduction and preliminaries

Consider the convex optimization problem

    inf_{x ∈ H} { f(x) + g(Lx) + h(x) },    (1)

where H and G are real Hilbert spaces, f : H → R̄ := R ∪ {±∞} and g : G → R̄ are proper, convex and lower semicontinuous functions, h : H → R is a convex and Fréchet differentiable function with Lipschitz continuous gradient, and L : H → G is a linear continuous operator.

Due to numerous applications in fields like signal and image processing, portfolio optimization, cluster analysis, location theory, network communication, and machine learning, the design and investigation of numerical algorithms for solving convex optimization problems of type (1) has attracted considerable interest from the applied mathematics community in recent years.
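To fix ideas, here is a minimal NumPy sketch (ours, not from the paper) of two building blocks that the splitting methods discussed below combine: a proximal map with a closed form, and one forward-backward step covering the smooth part h of (1). The names prox_l1 and forward_backward_step are illustrative.

```python
import numpy as np

# Proximal map of the l1 norm (soft-thresholding), a standard example
# of a prox that can be evaluated in closed form.
def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# One forward-backward (proximal-gradient) step for the case g = 0
# in (1): x+ = prox_{tau*f}(x - tau * grad_h(x)).
def forward_backward_step(x, tau, prox_f, grad_h):
    return prox_f(x - tau * grad_h(x), tau)
```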
Proximal splitting algorithms for solving convex optimization problems involving compositions with linear continuous operators have been proposed by Combettes and Wajs [19], Esser et al. [26], Chambolle and Pock [14], and He and Yuan [32].
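As a concrete reference point, the following sketch spells out the primal-dual iteration of Chambolle and Pock [14] for the case h = 0 in (1); the function signature and the matrix representation of L are our simplifications.

```python
import numpy as np

def chambolle_pock(prox_tau_f, prox_sigma_gstar, K, x0, y0,
                   tau, sigma, theta=1.0, n_iters=100):
    """Primal-dual iteration of [14] for min_x f(x) + g(Kx).
    Requires tau * sigma * ||K||^2 < 1; prox_sigma_gstar is the
    proximal map of sigma * g*, with g* the Fenchel conjugate of g."""
    x, y, x_bar = x0.copy(), y0.copy(), x0.copy()
    for _ in range(n_iters):
        y = prox_sigma_gstar(y + sigma * (K @ x_bar))   # dual step
        x_new = prox_tau_f(x - tau * (K.T @ y))         # primal step
        x_bar = x_new + theta * (x_new - x)             # extrapolation
        x = x_new
    return x, y
```

In practice prox_sigma_gstar is obtained from the prox of g via Moreau's decomposition, prox_{σg*}(v) = v − σ prox_{g/σ}(v/σ).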
The aim of this paper is to provide a unifying algorithmic scheme for solving monotone inclusion problems, one which encompasses several primal-dual iterative methods [8, 14, 20, 41] and, in the particular case of convex optimization problems, the ADMM algorithm.
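For comparison with the primal-dual methods above, here is a minimal sketch (our own, with a quadratic f chosen so that the x-update has a closed form) of the classical ADMM scheme applied to a convex problem of type (1) with h = 0, obtained by splitting z = Lx.

```python
import numpy as np

def admm_composite(L, b, prox_g, rho=1.0, n_iters=200):
    """Classical ADMM for min_x 0.5*||x - b||^2 + g(Lx), written as
    min 0.5*||x - b||^2 + g(z) subject to Lx = z (scaled dual form).
    prox_g(v, t) should return prox_{t*g}(v)."""
    n, m = L.shape[1], L.shape[0]
    x, z, u = np.zeros(n), np.zeros(m), np.zeros(m)
    A = np.eye(n) + rho * (L.T @ L)       # x-update system matrix
    for _ in range(n_iters):
        # x-update: minimize 0.5*||x-b||^2 + (rho/2)*||Lx - z + u||^2
        x = np.linalg.solve(A, b + rho * (L.T @ (z - u)))
        z = prox_g(L @ x + u, 1.0 / rho)  # z-update via prox of g
        u = u + L @ x - z                 # scaled dual update
    return x
```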
Summary
Briceño-Arias and Combettes pioneered the primal-dual approach in [13], by reformulating the general monotone inclusion in an appropriate product space as the sum of a maximally monotone operator and a linear and skew one, and by solving the resulting inclusion problem via a forward-backward-forward type algorithm (see [16]). The aim of this paper is to provide a unifying algorithmic scheme for solving monotone inclusion problems, one which encompasses several primal-dual iterative methods [8, 14, 20, 41] and, in the particular case of convex optimization problems, the ADMM algorithm together with its variants from [38]. We derive convergence rates for the iterates under supplementary strong monotonicity assumptions. To this aim, we use a dynamical step size strategy, based on which we can provide a unifying scheme for the algorithms in [9, 14]. Recall that γ-strong convexity of f implies that ∂f is γ-strongly monotone (see [4, Example 22.3]).
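One well-known instance of such a dynamical step size strategy is the rule of [14, Algorithm 2] for a γ-strongly convex f, sketched below; the generator interface is our illustration.

```python
import math

def accelerated_steps(tau0, sigma0, gamma, n_iters):
    """Dynamical step sizes as in [14, Algorithm 2]: theta_n, tau_n and
    sigma_n are updated so that tau_n * sigma_n stays constant while
    tau_n -> 0, which yields improved convergence rates for the iterates
    when f is gamma-strongly convex."""
    tau, sigma = tau0, sigma0
    for _ in range(n_iters):
        theta = 1.0 / math.sqrt(1.0 + 2.0 * gamma * tau)
        yield tau, sigma, theta
        tau, sigma = theta * tau, sigma / theta
```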