Abstract

With the increasing interest in applying the methodology of difference-of-convex (dc) optimization to diverse problems in engineering and statistics, this paper establishes the dc property of many functions, arising in various areas of application, that were not previously known to belong to this class. Motivated by a quadratic-programming-based recourse function in two-stage stochastic programming, we show that the (optimal) value function of a copositive (thus not necessarily convex) quadratic program is dc on the domain of finiteness of the program when the matrix in the objective function's quadratic term and the constraint matrix are fixed. The proof of this result is based on a dc decomposition of a piecewise $\mathrm{LC}^1$ function (i.e., a piecewise function whose pieces are differentiable with Lipschitz continuous gradients). Armed with these new results and known properties of dc functions from the literature, we show that many composite statistical functions in risk analysis, including the value-at-risk (VaR), the conditional value-at-risk (CVaR), the optimized certainty equivalent, and the expectation-based, VaR-based, and CVaR-based random deviation functionals, are all dc. Adding the known class of dc surrogate sparsity functions that are employed as approximations of the $\ell_0$ function in statistical learning, our work significantly expands the classes of dc functions and positions them for fruitful applications.
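As a standard illustration of a dc decomposition (not taken from the paper itself), consider a quadratic form $x^{\top} Q x$ with a symmetric but possibly indefinite matrix $Q$; here $\rho$ is an assumed shift parameter chosen so that $\rho \ge \max\{0,\, -\lambda_{\min}(Q)\}$, where $\lambda_{\min}(Q)$ denotes the smallest eigenvalue of $Q$:

$$x^{\top} Q x \;=\; x^{\top}(Q + \rho I)\,x \;-\; \rho\,\|x\|^{2}.$$

Under this choice of $\rho$, the matrix $Q + \rho I$ is positive semidefinite, so both terms on the right-hand side are convex, exhibiting the nonconvex quadratic explicitly as a difference of two convex functions.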

