Abstract

Ensemble learning combines several slightly different learners to construct a stronger learner, and an ensemble of a well-selected subset of learners can outperform an ensemble of all of them. However, the well-studied accuracy/diversity ensemble pruning framework tends to overfit the training data, which yields a target learner with relatively low generalization ability. We propose to ensemble base learners trained on both labeled and unlabeled data, adopting a data-dependent kernel mapping, which has proved successful in semi-supervised learning, to obtain more generalized base learners. We bootstrap both the training data and the unlabeled data (the point cloud) to build slightly different data sets, and then construct a data-dependent kernel from each. With such kernels, data points are mapped into different feature spaces, which yields an effective ensemble. We also prove that an ensemble of learners trained on both labeled and unlabeled data has better generalization ability in the sense of the graph Laplacian. Experiments on the UCI data repository show the effectiveness of the proposed method.
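The abstract gives no pseudocode, so the following is only a minimal sketch of the idea it describes: each ensemble member bootstraps both the labeled set and the unlabeled point cloud, warps a base kernel with the graph Laplacian of its point cloud (a data-dependent kernel in the style of deformed kernels from semi-supervised learning), and trains a base learner in the resulting feature space. The RBF base kernel, the specific Laplacian warp, kernel ridge regression as the base learner, and all parameter values are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between rows of A and rows of B."""
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def deformed_kernel(K_xZ, K_yZ, K_xy, G, mu=0.1):
    """Data-dependent warp of a base kernel by the graph Laplacian of the
    point cloud Z (an assumed form, in the spirit of deformed kernels for
    semi-supervised learning):
        K~(x, y) = K(x, y) - k_x^T (I + mu L G)^{-1} (mu L) k_y
    where G = K(Z, Z), L is the unnormalized Laplacian of the similarity
    graph G, and k_x = K(x, Z)."""
    W = G - np.diag(np.diag(G))           # similarity graph, no self-loops
    L = np.diag(W.sum(axis=1)) - W        # unnormalized graph Laplacian
    M = mu * L
    n = G.shape[0]
    inv_M = np.linalg.solve(np.eye(n) + M @ G, M)   # (I + M G)^{-1} M
    return K_xy - K_xZ @ inv_M @ K_yZ.T

class SemiSupervisedKernelEnsemble:
    """Bagged kernel ridge classifiers: each member bootstraps BOTH the
    labeled data and the unlabeled point cloud, so each sees a slightly
    different data-dependent kernel (and hence feature space)."""

    def __init__(self, n_members=5, gamma=0.5, mu=0.1, lam=0.1, seed=0):
        self.n_members, self.gamma, self.mu, self.lam = n_members, gamma, mu, lam
        self.rng = np.random.default_rng(seed)
        self.members = []

    def fit(self, X_lab, y, X_unl):
        for _ in range(self.n_members):
            il = self.rng.integers(0, len(X_lab), len(X_lab))  # bootstrap labeled set
            iu = self.rng.integers(0, len(X_unl), len(X_unl))  # bootstrap point cloud
            Xl, yl = X_lab[il], y[il]
            Z = np.vstack([Xl, X_unl[iu]])        # this member's point cloud
            G = rbf_kernel(Z, Z, self.gamma)
            K_lZ = rbf_kernel(Xl, Z, self.gamma)
            K_ll = rbf_kernel(Xl, Xl, self.gamma)
            Kt = deformed_kernel(K_lZ, K_lZ, K_ll, G, self.mu)
            # kernel ridge regression on +/-1 labels as the base learner
            alpha = np.linalg.solve(Kt + self.lam * np.eye(len(yl)), yl)
            self.members.append((alpha, Xl, Z))
        return self

    def decision(self, X):
        scores = np.zeros(len(X))
        for alpha, Xl, Z in self.members:
            G = rbf_kernel(Z, Z, self.gamma)
            K_xZ = rbf_kernel(X, Z, self.gamma)
            K_lZ = rbf_kernel(Xl, Z, self.gamma)
            K_xl = rbf_kernel(X, Xl, self.gamma)
            Kt = deformed_kernel(K_xZ, K_lZ, K_xl, G, self.mu)
            scores += Kt @ alpha
        return scores / self.n_members       # average the members' scores

    def predict(self, X):
        return np.where(self.decision(X) >= 0.0, 1, -1)
```

Because each member's Laplacian is built from its own bootstrap of the point cloud, diversity among base learners comes from the kernel itself rather than only from resampled labels, which is the mechanism the abstract appeals to.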
