Abstract

The double-proximal gradient algorithm (DPGA) is a new variant of the classical difference-of-convex algorithm (DCA) for solving difference-of-convex (DC) optimization problems. In this paper, we propose an accelerated double-proximal gradient algorithm (ADPGA) for DC programming in which the objective function consists of three convex modules, only one of which is smooth. We establish convergence of the sequence generated by our algorithm when the objective function satisfies the Kurdyka–Łojasiewicz (KŁ) property, and show that its convergence rate is not weaker than that of DPGA. Numerical experiments on an image processing model show that, compared with DPGA, ADPGA reduces the number of iterations by 43.57% and the running time by 43.47% on average.
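For illustration only, the sketch below implements a generic extrapolated proximal DC iteration of the kind the abstract describes: a nonsmooth convex g, a smooth convex phi, and a convex h, handled by a proximal step on g, a gradient step on phi, a subgradient linearization of h, and Nesterov-type extrapolation as the acceleration. The model problem (g = lam*||x||_1, phi = 0.5*||Ax - b||^2, h = lam*||x||_2), the step-size rule, and the momentum cap are all assumptions made to keep the example runnable; this is not the paper's exact ADPGA.

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def extrapolated_prox_dc(A, b, lam=0.1, max_iter=500, tol=1e-8):
    # Illustrative sketch (not the paper's ADPGA) for the DC model
    #   min_x  g(x) + phi(x) - h(x)
    # with g(x)   = lam * ||x||_1        (nonsmooth convex),
    #      phi(x) = 0.5 * ||A x - b||^2  (smooth convex),
    #      h(x)   = lam * ||x||_2        (convex; subgradient lam*x/||x|| off the origin).
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad phi
    t = 1.0 / L                              # step size for the forward step
    x_prev = x = np.zeros(n)
    th_prev = th = 1.0                       # Nesterov momentum parameters
    for _ in range(max_iter):
        beta = min((th_prev - 1.0) / th, 0.98)        # capped extrapolation weight
        w = x + beta * (x - x_prev)                   # extrapolated point
        nx = np.linalg.norm(x)
        xi = lam * x / nx if nx > 0 else np.zeros(n)  # xi in the subdifferential of h at x
        grad = A.T @ (A @ w - b)                      # grad phi at the extrapolated point
        x_prev, x = x, soft_threshold(w - t * (grad - xi), t * lam)
        th_prev, th = th, 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * th ** 2))
        if np.linalg.norm(x - x_prev) <= tol * max(1.0, np.linalg.norm(x)):
            break
    return x

# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true
x_hat = extrapolated_prox_dc(A, b)
print("estimated support:", np.flatnonzero(np.abs(x_hat) > 1e-6))

The regularizer g - h = lam*(||x||_1 - ||x||_2) is a standard DC sparsity penalty, chosen here because both its proximal step (soft-thresholding) and the subgradient of h have closed forms; the momentum cap keeps the extrapolation weights bounded away from 1, as convergence analyses of extrapolated proximal DC schemes typically require.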
