Abstract

Kernel partial least squares (KPLS) is a generic kernel regression method that has proven competitive with other kernel regression methods such as support vector machines for regression (SVM) and kernel ridge regression. Kernel boosted latent features (KBLF) is a variant of KPLS that accommodates any differentiable convex loss function; it provides a more flexible framework for various predictive modeling tasks, such as classification with the logistic loss and robust regression with the L1-norm loss. However, KPLS and KBLF solutions are dense and thus not suitable for large-scale computation. Sparsification of KPLS solutions has been studied in both the dual and primal forms. Dual sparsification requires solving a nonlinear optimization problem at every iteration, and this computational burden limits its applicability to general regression tasks. In this paper, we propose simple heuristics for approximating sparse KPLS solutions, and we apply the same framework to sparsify KBLF solutions. The algorithm traces an interesting path from a maximum-residual-criterion algorithm with orthogonality conditions to dense KPLS/KBLF. This orthogonality distinguishes it from many existing forward-selection-type algorithms. The computational advantage is illustrated on benchmark datasets, with a comparison to SVM.
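The abstract does not spell out the KPLS algorithm itself. For orientation only, the following is a minimal sketch of dense kernel PLS in its standard dual (NIPALS-style) form for a single response, assuming an RBF kernel; the function names (`kpls_fit`, `kpls_predict`) and the parameter choices are illustrative assumptions, not the paper's method, and the sparse heuristics proposed in the paper are not shown.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian kernel between rows of A and rows of B (illustrative choice of kernel).
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d2)

def kpls_fit(X, y, n_components=3, gamma=1.0):
    """Dense kernel PLS (dual/NIPALS form) for a univariate response; a sketch, not the paper's algorithm."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix in feature space
    Kc = H @ K @ H
    Y = y.reshape(-1, 1).astype(float)
    y_mean = Y.mean()
    Y = Y - y_mean

    T = np.zeros((n, n_components))            # latent score vectors
    U = np.zeros((n, n_components))            # dual weight vectors
    Kres, Yres = Kc.copy(), Y.copy()
    for i in range(n_components):
        u = Yres[:, 0]                         # single response: NIPALS converges in one step
        t = Kres @ u
        t = t / np.linalg.norm(t)
        T[:, i], U[:, i] = t, u
        # Deflate kernel and response along the extracted score direction.
        P = np.eye(n) - np.outer(t, t)
        Kres = P @ Kres @ P
        Yres = Yres - np.outer(t, t @ Yres)
    # Dual regression coefficients: predictions on training data are Kc @ B.
    B = U @ np.linalg.solve(T.T @ Kc @ U, T.T @ Y)
    return B, X, K, H, y_mean

def kpls_predict(model, Xtest, gamma=1.0):
    B, Xtrain, K, H, y_mean = model
    n = Xtrain.shape[0]
    Kt = rbf_kernel(Xtest, Xtrain, gamma)
    # Center the test kernel consistently with the training-set centering.
    Ktc = (Kt - (np.ones((Xtest.shape[0], n)) / n) @ K) @ H
    return (Ktc @ B).ravel() + y_mean

# Small synthetic usage example.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)
model = kpls_fit(X, y, n_components=5, gamma=0.5)
print(kpls_predict(model, X[:5], gamma=0.5))
```

Note that every dual coefficient in `B` is generally nonzero, so prediction touches all training points; this is the density issue that motivates the sparsification heuristics described above.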
