Abstract

The Difference of Convex functions Algorithm (DCA) is widely used for minimizing the difference of two convex functions. A recently proposed accelerated version, termed BDCA for Boosted DC Algorithm, incorporates a line search step to achieve a larger decrease of the objective value at each iteration. Thanks to this step, BDCA usually converges much faster than DCA in practice. The solutions found by DCA are guaranteed to be critical points of the problem, but these may not be local minima. Although BDCA tends to improve the objective value of the solutions it finds, these are frequently just critical points as well. In this paper we combine BDCA with a simple Derivative-Free Optimization (DFO) algorithm to force d-stationarity (the lack of a descent direction) at the point obtained. The potential of this approach is illustrated through computational experiments on a Minimum-Sum-of-Squares clustering problem. Our numerical results demonstrate that the new method provides better solutions while still remaining faster than DCA in the majority of test cases.

Highlights

  • Problem (P) can be tackled by the well-known DC algorithm (DCA) [14, 15]

  • The Boosted DC algorithm (BDCA) performs a line search at the point generated by the classical Difference of Convex functions Algorithm (DCA), which allows it to achieve a larger decrease in the objective value at each iteration

  • The aim of this paper is to show that it is possible to combine BDCA with a simple DFO (Derivative-Free Optimization) routine to guarantee d-stationarity at the limit point obtained by the algorithm
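The interplay between the DCA step and the line search can be sketched on a toy one-dimensional DC decomposition. The example below is hypothetical and chosen only for illustration (it is not taken from the paper): f(x) = g(x) − h(x) with g(x) = x⁴/4 and h(x) = x², both convex, so the DCA subproblem has a closed-form solution. The parameters `lam0`, `alpha` and `beta` are assumed sample choices for the backtracking line search.

```python
import numpy as np

# Toy DC decomposition (hypothetical, for illustration only):
#   f(x) = g(x) - h(x),  g(x) = x**4 / 4,  h(x) = x**2,  both convex.
# The critical points of f are 0 and +/- sqrt(2); the minima are +/- sqrt(2).
def g(x): return x**4 / 4
def h(x): return x**2
def f(x): return g(x) - h(x)

def dca_step(x):
    # Classical DCA subproblem: solve grad g(y) = grad h(x),
    # i.e. y**3 = 2*x, which here has the closed form y = cbrt(2*x).
    return np.cbrt(2 * x)

def bdca(x, iters=50, lam0=1.0, alpha=0.1, beta=0.5):
    for _ in range(iters):
        y = dca_step(x)      # point produced by the classical DCA step
        d = y - x            # BDCA uses d as a descent direction at y
        lam = lam0
        # Backtracking line search: accept lam once f(y + lam*d) is
        # sufficiently below f(y) (Armijo-type decrease condition).
        while lam > 1e-12 and f(y + lam * d) > f(y) - alpha * lam * d**2:
            lam *= beta
        x = y + lam * d      # the extra step beyond y boosts the decrease
    return x

x_star = bdca(3.0)
```

Starting from x = 3, both DCA and BDCA converge to the critical point √2 ≈ 1.4142, but the accepted line search steps make the objective decrease faster per iteration than the plain DCA update alone.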


Summary

Preliminaries

We recall some preliminary notions and basic results which will be used in the sequel. Throughout this paper, ⟨x, y⟩ denotes the inner product of x, y ∈ Rm, and ‖·‖ corresponds to the induced norm, given by ‖x‖ = √⟨x, x⟩. For a function f : Rm → R ∪ {+∞}, the set dom f := {x ∈ Rm | f(x) < +∞} denotes the (effective) domain of f. A function f is proper if its domain is nonempty. The function f is coercive if f(x) → +∞ whenever ‖x‖ → +∞, and it is said to be convex if f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y) for all x, y ∈ Rm and λ ∈ [0, 1]. For any convex function f, the subdifferential of f at x ∈ Rm is the set ∂f(x) := {w ∈ Rm | f(y) ≥ f(x) + ⟨w, y − x⟩ for all y ∈ Rm}.
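As a standard illustration of the subdifferential (not taken from the paper): for the convex but nonsmooth function f(x) = |x|, every w in the interval [−1, 1] is a subgradient at x = 0, and the defining inequality can be checked numerically.

```python
import numpy as np

# Subgradient inequality check for the convex, nonsmooth f(x) = |x|.
# Every w in [-1, 1] belongs to the subdifferential of |.| at x = 0, i.e.
#   f(y) >= f(x) + w * (y - x)  for all y.
f = abs
x = 0.0
for w in np.linspace(-1.0, 1.0, 11):       # candidate subgradients
    for y in np.linspace(-5.0, 5.0, 101):  # sample test points
        assert f(y) >= f(x) + w * (y - x) - 1e-12
```

The small tolerance 1e-12 only guards against floating-point rounding; the inequality |y| ≥ w·y holds exactly whenever |w| ≤ 1.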

Basic Assumptions
Optimality Conditions
DCA and Boosted DCA
Positive Spanning Sets
Forcing BDCA to Converge to d-Stationary Points
Numerical Experiments
Findings
Concluding remarks