Abstract

For the treatment of outliers, the paper “Risk-Based Robust Statistical Learning by Stochastic Difference-of-Convex Value-Function Optimization” by Junyi Liu and Jong-Shi Pang proposes a risk-based robust statistical learning model. Employing a variant of the conditional value-at-risk (CVaR) risk measure, called the interval conditional value-at-risk (In-CVaR), the model aims to exclude the risks associated with the left and right tails of the loss. The resulting nonsmooth and nonconvex model is formulated in terms of the population In-CVaR risk and distinguishes upside from downside losses with asymmetric weights. To solve the model in both regression and classification, the authors show that the objective function is the difference of two convex functions, each of which is the optimal objective value of a univariate convex stochastic program. A sampling- and convex-programming-based algorithm is developed with appropriate control of incremental sample sizes, and its subsequential almost-sure convergence to a critical point is established. Numerical results illustrate the practical performance of the model and methodology.
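
To make the difference-of-convex structure concrete, the following is a minimal sketch under the assumption that In-CVaR averages the quantile (value-at-risk) function of the loss Z over an interval of confidence levels [β₁, β₂]; the paper's exact definition and parameterization may differ.

\[
\mathrm{In\text{-}CVaR}_{\beta_1,\beta_2}(Z)
\;=\; \frac{1}{\beta_2-\beta_1}\int_{\beta_1}^{\beta_2}\mathrm{VaR}_t(Z)\,dt
\;=\; \frac{(1-\beta_1)\,\mathrm{CVaR}_{\beta_1}(Z)-(1-\beta_2)\,\mathrm{CVaR}_{\beta_2}(Z)}{\beta_2-\beta_1},
\]
where each CVaR term admits the standard Rockafellar–Uryasev value-function representation
\[
\mathrm{CVaR}_{\beta}(Z)\;=\;\min_{\eta\in\mathbb{R}}\Big\{\,\eta+\tfrac{1}{1-\beta}\,\mathbb{E}\big[(Z-\eta)_{+}\big]\Big\}.
\]

Under this sketch, the objective is a difference of two convex functions, each arising as the optimal value of a univariate convex stochastic program in η, which mirrors the structure described in the abstract.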
