Abstract

The possibilities of exploiting the special structure of d.c. programs, which consist of optimising the difference of convex functions, are currently more or less limited to variants of the DCA proposed by Pham Dinh Tao and Le Thi Hoai An in 1997. These assume that either the convex or the concave part, or both, are evaluated by one of their subgradients. In this paper we propose an algorithm which allows the evaluation of both the concave and the convex part by their proximal points. Additionally, we allow a smooth part, which is evaluated via its gradient. In the spirit of primal-dual splitting algorithms, the concave part might be the composition of a concave function with a linear operator, which are, however, evaluated separately. For this algorithm we show that every cluster point is a solution of the optimisation problem. Furthermore, we show the connection to the Toland dual problem and prove a descent property for the objective function values of a primal-dual formulation of the problem. Convergence of the iterates is shown if this objective function satisfies the Kurdyka–Łojasiewicz property. In the last part, we apply the algorithm to an image processing model.
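For orientation, a problem template consistent with this description (the symbols g, Φ, h and K are illustrative notation, not necessarily the paper's) is
\[
\min_{x} \; g(x) + \Phi(x) - h(Kx),
\]
where g and h are proper, convex, lower semicontinuous functions, Φ is a smooth convex function with Lipschitz continuous gradient, and K is a linear operator. In such a scheme, g is accessed through its proximal mapping, h through the proximal mapping of its conjugate h*, Φ through its gradient, and K only through forward evaluations of K and its adjoint K*. For this template, the classical Toland dual reads
\[
\min_{y} \; h^*(y) - (g + \Phi)^*(K^* y).
\]
The Kurdyka–Łojasiewicz property invoked for the convergence of the iterates requires, roughly, that near a critical point z̄ of the relevant function F there is a concave desingularising function φ with φ(0) = 0 such that φ′(F(z) − F(z̄)) · dist(0, ∂F(z)) ≥ 1.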

Highlights

  • The possibilities of exploiting the special structure of d.c. programs, which consist of optimising the difference of convex functions, are currently more or less limited to variants of the DCA proposed by Pham Dinh Tao and Le Thi Hoai An in 1997

  • We show the connection to the Toland dual problem and prove a descent property for the objective function values of a primal-dual formulation of the problem

  • Convergence of the iterates is shown if this objective function satisfies the Kurdyka–Łojasiewicz property

Introduction

Optimisation problems where the objective function can be written as a difference of two convex functions arise naturally in several applications, such as image processing [1], machine learning [2], optimal transport [3] and sparse signal recovery [4]. We go one step further by proposing an algorithm in which both the convex and the concave part are evaluated via proximal steps. Moreover, we consider a linear operator in the concave part of the objective function, which is evaluated in a forward manner in the spirit of primal-dual splitting methods. In Section 3, we propose a double-proximal d.c. algorithm, which generates both a primal and a dual sequence of iterates, and we show several properties which make it comparable to DCA. We prove a descent property for the objective function values of a primal-dual formulation and show that every cluster point of the sequence of iterates is a solution of the optimisation problem. In Section 4, we show global convergence of the algorithm and convergence rates for the iterates in certain cases, provided that the objective function of the primal-dual reformulation satisfies the Kurdyka–Łojasiewicz property; in other words, it is a KŁ function. We close our paper with some numerical examples addressing an image deblurring and denoising problem in the context of different d.c. regularisations.
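As a rough illustration of the double-proximal idea described above (one proximal step on the conjugate of the function in the concave part, one proximal-gradient step on the convex and smooth parts, the linear operator used only in forward mode), here is a schematic Python iteration. It is a sketch under the template min g(x) + Φ(x) − h(Kx); the function names, the step sizes gamma and mu, and the update order are assumptions made for illustration, not the authors' exact scheme.

import numpy as np

def prox_l1(v, t):
    """Proximal map of t*||.||_1 (soft-thresholding); a possible stand-in for the prox of g."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def double_prox_dc_step(x, y, K, K_adj, grad_phi, prox_g, prox_h_conj, gamma, mu):
    """One schematic iteration for  min  g(x) + phi(x) - h(Kx).

    g and h are accessed via proximal maps (h via the prox of its conjugate h*),
    phi via its gradient, and K only via forward applications of K and its
    adjoint K_adj. Illustrative sketch only, not the algorithm of the paper.
    """
    # Dual step: proximal update on h* using the forward image K x.
    y_next = prox_h_conj(y + mu * K(x), mu)
    # Primal step: proximal-gradient update on g + phi, with the concave
    # part linearised through the dual information K* y_next.
    x_next = prox_g(x - gamma * (grad_phi(x) - K_adj(y_next)), gamma)
    return x_next, y_next

For example, taking prox_g = prox_l1, prox_h_conj a projection onto a dual-norm ball (as arises when h is a norm), grad_phi the gradient of a smooth data-fidelity term, and K a discrete gradient operator would mirror, in spirit, the image deblurring and denoising setting mentioned at the end of this introduction.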

Notation and preliminaries
Problem statement
The algorithm
Convergence under Kurdyka–Łojasiewicz assumptions
The case when the objective function of the primal-dual reformulation is a KŁ function
Convergence rates
Application to image processing
The proximal point of the anisotropic total variation
Numerical results
