Abstract

This paper presents a simple multiple kernel learning framework for modeling complicated data, in which randomized multi-scale Gaussian kernels serve as base kernels and an ℓ1-norm regularizer is integrated as a sparsity constraint on the solution. The randomly pre-chosen scales yield random basis functions with diverse approximation ability and lead to extremely low computational complexity in finding the optimal solution. The parameter of the probability distribution from which the scales are drawn and the regularizing factor are determined from the training data by cross-validation, and the combination weights are obtained by solving a well-posed linear system. Comparison experiments with six learning algorithms are carried out on one function-approximation task and three real-world regression problems. The way the multi-scale kernels fit the target function is illustrated, and analyses of the sparsity of the solution and the robustness of the system with respect to the regularizing factor are given.
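The overall procedure can be illustrated with a minimal sketch, assuming details the abstract leaves open: kernel centers placed at the training points, scales drawn log-uniformly as a stand-in for the pre-chosen probability distribution, and scikit-learn's Lasso used as the ℓ1-regularized solver in place of the paper's own solution method. All names, ranges, and parameter values below are illustrative, not the authors' settings.

# Sketch of randomized multi-scale Gaussian kernel regression with an
# l1-norm sparsity constraint. Details marked "assumption" are not taken
# from the paper.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def multiscale_gaussian_features(X, centers, scales):
    """Evaluate Gaussian basis functions, one random scale per center."""
    # Squared distances between samples and centers: (n_samples, n_centers)
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * scales[None, :] ** 2))

# Toy 1-D regression data (assumption: a smooth target plus noise).
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)

# Randomly pre-chosen scales; the distribution and its parameter would be
# tuned by cross-validation on the training data (assumption: log-uniform).
centers = X
scales = np.exp(rng.uniform(np.log(0.1), np.log(2.0), size=len(centers)))

Phi = multiscale_gaussian_features(X, centers, scales)

# l1-norm regularizer as a sparsity constraint on the combination weights;
# the regularizing factor alpha would likewise be chosen by cross-validation.
model = Lasso(alpha=1e-3, max_iter=50_000).fit(Phi, y)
print("nonzero weights:", np.count_nonzero(model.coef_), "of", Phi.shape[1])

Because the scales are fixed before training, fitting reduces to a single convex problem in the combination weights, which is the source of the low computational cost the abstract claims.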
