A new robust loss function, called the Lq-loss, is proposed based on the concepts of quantile and correntropy; it can be seen as an improved version of the quantile loss function. The proposed Lq-loss possesses several important properties that have recently attracted considerable attention: asymmetry, non-convexity, and boundedness. The Lq-loss includes and extends traditional loss functions such as the pinball loss, the rescaled hinge loss, the L1-norm loss, and the zero-norm loss. Additionally, we demonstrate that the Lq-loss is a kernel-induced loss, derived from a piecewise reproducing kernel function. Further, by applying the Lq-loss to the support vector machine, two robust SVM frameworks are presented to handle robust classification and regression problems, respectively. Last but not least, we demonstrate that the proposed classification framework satisfies Bayes' optimal decision rule. However, the non-convexity of the Lq-loss makes the resulting models difficult to optimize. A non-convex optimization method, the concave–convex procedure (CCCP), is therefore used to solve the proposed models, and the convergence of the algorithms is proved theoretically. For both classification and regression tasks, experiments are carried out on three kinds of data: UCI benchmark datasets, artificial datasets, and a practical application dataset. Compared with several classical and state-of-the-art methods, numerical results under different noise settings and evaluation criteria show that the proposed methods are robust to feature noise and outliers in both classification and regression applications.
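The abstract does not give the Lq-loss in closed form; for orientation, the two standard ingredients it names can be sketched as follows. The quantile level $\tau \in (0,1)$ and bandwidth $\sigma > 0$ are assumed notation, not taken from the paper.

```latex
% Pinball (quantile) loss at level \tau, applied to a residual u:
L_\tau(u) =
\begin{cases}
  \tau\, u,       & u \ge 0,\\
  (\tau - 1)\, u, & u < 0.
\end{cases}

% Correntropy-induced (Welsch-type) loss with bandwidth \sigma,
% bounded above by 1 and hence insensitive to large outliers:
L_\sigma(u) = 1 - \exp\!\left(-\frac{u^2}{2\sigma^2}\right).
```

A composition such as $1 - \exp\left(-L_\tau(u)/\sigma\right)$ would inherit the asymmetry of the pinball loss together with the boundedness and non-convexity of the correntropy term, consistent with the properties the abstract lists; the paper's actual definition may differ.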
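To illustrate the CCCP step the abstract refers to, the sketch below applies it to a small surrogate problem: estimating a robust location parameter under the bounded correntropy-style loss above. This is not the paper's algorithm; the data, `sigma`, and the decomposition are assumptions chosen so the example is self-contained.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hedged CCCP sketch (not the paper's model): estimate a location m under
# the bounded loss f(r) = 1 - exp(-r^2 / (2*sigma^2)), r = x_i - m, written
# as a difference of convex functions f = g - h with
#     g(r) = r^2 / (2*sigma^2)        (convex)
#     h(r) = g(r) - f(r)              (also convex for this choice).
# Each CCCP iteration replaces h by its tangent at the current iterate and
# minimizes the resulting convex surrogate, decreasing the objective.

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(2.0, 0.3, 95),
                    rng.normal(40.0, 1.0, 5)])  # 5 gross outliers
sigma = 1.0

def grad_h(m):
    # d/dm of the concave part's negation h(m) = sum_i [g(r_i) - f(r_i)]
    r = x - m
    return np.sum(-(r / sigma**2) * (1.0 - np.exp(-r**2 / (2 * sigma**2))))

m = np.mean(x)  # initialize at the outlier-contaminated sample mean
for _ in range(50):
    gh = grad_h(m)
    # convex surrogate: sum of quadratic g terms minus linearized h
    surrogate = lambda t, gh=gh: np.sum((x - t)**2) / (2 * sigma**2) - gh * t
    m_new = minimize_scalar(surrogate).x
    if abs(m_new - m) < 1e-8:
        break
    m = m_new

print(f"robust estimate: {m:.3f} (contaminated mean: {np.mean(x):.3f})")
```

Because the surrogate upper-bounds the true objective and is tight at the current iterate, the loop is monotonically decreasing; this majorize-minimize property is what underlies the kind of convergence guarantee the abstract claims for the proposed models.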