Abstract

We consider the problem of minimizing a difference-of-convex (DC) function that can be written as the sum of a smooth convex function with Lipschitz gradient, a proper closed convex function, and a continuous, possibly nonsmooth, concave function. We refine the convergence analysis in Wen et al. (Comput Optim Appl 69, 297–324, 2018) for the proximal DC algorithm with extrapolation ($\mathrm{pDCA}_e$) and show that the whole sequence generated by the algorithm converges without imposing differentiability assumptions on the concave part. Our analysis is based on a new potential function, which we assume to be a Kurdyka–Łojasiewicz (KL) function. We also establish a relationship between our KL assumption and the one used in Wen et al. (2018). Finally, we demonstrate how the $\mathrm{pDCA}_e$ can be applied to a class of simultaneous sparse recovery and outlier detection problems arising from robust compressed sensing in signal processing and least trimmed squares regression in statistics. Specifically, we show that the objectives of these problems can be written as level-bounded DC functions whose concave parts are typically nonsmooth. Moreover, for a large class of loss functions and regularizers, the KL exponent of the corresponding potential function is shown to be 1/2, which implies that the $\mathrm{pDCA}_e$ is locally linearly convergent when applied to these problems. Our numerical experiments show that the $\mathrm{pDCA}_e$ usually outperforms the proximal DC algorithm with nonmonotone linesearch (Liu et al., Math Program, 2018, https://doi.org/10.1007/s10107-018-1327-8, Appendix A) in both CPU time and solution quality for this particular application.
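To make the DC structure and the extrapolation step concrete, the following minimal Python sketch (not the authors' code) applies the $\mathrm{pDCA}_e$ iteration to the illustrative $\ell_1$-$\ell_2$ regularized least-squares problem $\min_x \tfrac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1 - \lambda\|x\|_2$, whose concave part $-\lambda\|x\|_2$ is nonsmooth at the origin. The function name, the FISTA-type extrapolation parameters with periodic restart, and the stopping rule are assumptions made for illustration, not prescriptions from the paper.

```python
import numpy as np


def pdca_e_l1_minus_l2(A, b, lam, max_iter=5000, restart=200, tol=1e-8):
    """Illustrative pDCA_e sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1 - lam*||x||_2.

    Assumed DC structure: f(x) = 0.5*||Ax - b||^2 (smooth convex, gradient Lipschitz
    with L = ||A||_2^2), P1(x) = lam*||x||_1 (proper closed convex, prox is
    soft-thresholding), and P2(x) = lam*||x||_2 (continuous convex, nonsmooth at the
    origin, so -P2 is the possibly nonsmooth concave part).
    """
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of grad f
    x_prev = np.zeros(n)
    x = np.zeros(n)
    theta_prev = theta = 1.0                   # FISTA-type extrapolation parameters

    for k in range(max_iter):
        beta = (theta_prev - 1.0) / theta      # extrapolation weight beta_k in [0, 1)
        u = x + beta * (x - x_prev)            # extrapolated point u^k

        # A subgradient xi^k of P2 at x^k (0 is a valid choice at the origin).
        nrm = np.linalg.norm(x)
        xi = lam * x / nrm if nrm > 0 else np.zeros(n)

        # Proximal step on the convex majorant built at u^k:
        # x^{k+1} = prox_{P1/L}( u^k - (grad f(u^k) - xi^k) / L )
        grad = A.T @ (A @ u - b)
        z = u - (grad - xi) / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x_new)):
            return x_new

        # Update extrapolation parameters; the periodic reset keeps sup_k beta_k < 1,
        # as the convergence theory requires.
        theta_prev, theta = theta, 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta ** 2))
        if (k + 1) % restart == 0:
            theta_prev = theta = 1.0
        x_prev, x = x, x_new

    return x
```

Under these assumptions, a call such as `pdca_e_l1_minus_l2(A, b, 0.1)` returns an approximate stationary point of the DC objective; only the extrapolation step $u^k = x^k + \beta_k(x^k - x^{k-1})$ distinguishes this scheme from the plain proximal DC algorithm.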
