Abstract

Sparse representation of kernel-based regression (KBR) has received considerable attention in recent years. Studies on sparse KBR fall into two distinct groups: (i) pruning-based methods, which remove the training samples with the smallest training errors and retrain on the remaining samples, and (ii) direct methods, which start from a fully dense solution and delete training data according to objective criteria. Pruning-based methods incur a high computation time, while direct methods may yield non-optimal solutions and thus a poor approximation. In addition, most current KBR models assume that the errors are Gaussian distributed. However, observations in many practical applications indicate that the noise does not follow a Gaussian distribution, and in such cases current KBR models are not optimal. To address these problems, this study proposes a new sparse KBR framework for general noise distributions, including the epsilon-insensitive noise family. In contrast to other sparse algorithms, sparsity is imposed directly by epsilon-insensitive convex loss functions derived from a Bayesian framework within the scope of regularization networks, and the optimization problem is then handled in Lagrangian form. Experiments on artificial and real-life benchmark datasets demonstrate that the proposed epsilon-insensitive KBR models are more effective and efficient than pruning-based approaches.
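To make the sparsity mechanism concrete, the sketch below is not the paper's proposed models but a standard epsilon-SVR from scikit-learn on a hypothetical toy 1-D problem with non-Gaussian (Laplace) noise; the kernel, regularization constant, and epsilon values are illustrative assumptions. It shows the general effect the abstract relies on: with an epsilon-insensitive loss, only samples falling outside the epsilon-tube retain nonzero dual coefficients, so larger epsilon yields fewer support vectors and a sparser kernel expansion.

```python
# Minimal sketch (assumed toy setup, not the paper's models): epsilon-insensitive
# loss induces sparsity because samples inside the epsilon-tube get zero dual
# coefficients and drop out of the kernel expansion.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3.0, 3.0, size=(200, 1)), axis=0)
y = np.sinc(X).ravel() + rng.laplace(scale=0.05, size=200)  # non-Gaussian noise

for eps in (0.0, 0.05, 0.2):  # eps = 0 gives an essentially dense solution
    model = SVR(kernel="rbf", C=10.0, epsilon=eps).fit(X, y)
    n_sv = len(model.support_)  # samples with nonzero dual coefficients
    print(f"epsilon={eps}: {n_sv}/{len(X)} support vectors")
```

Increasing epsilon in this sketch shrinks the support-vector count directly, without any post-hoc pruning or retraining, which is the behavior the abstract contrasts with pruning-based approaches.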
