Abstract

We present a new co-training-style framework and combine it with ensemble learning to further improve generalization ability. By employing different strategies for combining co-training with ensemble learning, we develop two learning algorithms: Sequential Ensemble Co-Learning (SECL) and Parallel Ensemble Co-Learning (PECL). Furthermore, we propose a weighted bagging method in PECL to generate an ensemble of diverse classifiers at the end of co-training. Finally, based on the voting margin, we derive an upper bound on the generalization error of multi-classifier voting systems in the presence of both classification noise and distribution noise. Experimental results on six datasets show that our methods outperform the compared algorithms.
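
The abstract describes the approach only at a high level. As a rough, non-authoritative illustration of the general idea (co-training with confidence-based pseudo-labeling, followed by a bagging step that allows non-uniform bootstrap weights), a minimal Python sketch follows. It is not the paper's SECL/PECL procedure: the shared feature view, the confidence threshold `conf`, the weighting scheme, and all function names are assumptions introduced here for illustration.

```python
# Minimal sketch of co-training + weighted bagging (hypothetical, not SECL/PECL).
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

def co_train(clf_a, clf_b, X_lab, y_lab, X_pool, rounds=5, conf=0.9):
    """Each round, each classifier pseudo-labels the pool examples it is
    confident about and hands them to the *other* classifier."""
    Xa, ya = X_lab, y_lab
    Xb, yb = X_lab, y_lab
    for _ in range(rounds):
        if len(X_pool) == 0:
            break
        clf_a.fit(Xa, ya)
        clf_b.fit(Xb, yb)
        pa, pb = clf_a.predict_proba(X_pool), clf_b.predict_proba(X_pool)
        sure_a, sure_b = pa.max(1) >= conf, pb.max(1) >= conf
        # Confident pseudo-labels from one learner grow the other's training set.
        Xb = np.vstack([Xb, X_pool[sure_a]])
        yb = np.concatenate([yb, clf_a.classes_[pa[sure_a].argmax(1)]])
        Xa = np.vstack([Xa, X_pool[sure_b]])
        ya = np.concatenate([ya, clf_b.classes_[pb[sure_b].argmax(1)]])
        X_pool = X_pool[~(sure_a | sure_b)]
    return (Xa, ya), (Xb, yb)

def weighted_bagging(base, X, y, weights, n_estimators=10, seed=0):
    """Bagging with non-uniform bootstrap probabilities (e.g. per-example
    weights derived from pseudo-labeling confidence)."""
    rng = np.random.default_rng(seed)
    p = weights / weights.sum()
    models = []
    for _ in range(n_estimators):
        idx = rng.choice(len(X), size=len(X), replace=True, p=p)
        models.append(clone(base).fit(X[idx], y[idx]))
    return models

def majority_vote(models, X):
    # Assumes integer class labels.
    preds = np.stack([m.predict(X) for m in models]).astype(int)
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, preds)

# Toy usage: 50 labeled examples, the rest treated as an unlabeled pool.
X, y = make_classification(n_samples=300, random_state=0)
(Xa, ya), _ = co_train(DecisionTreeClassifier(max_depth=3),
                       DecisionTreeClassifier(max_depth=5),
                       X[:50], y[:50], X[50:])
ensemble = weighted_bagging(DecisionTreeClassifier(), Xa, ya,
                            weights=np.ones(len(Xa)))
print(majority_vote(ensemble, X[:5]))
```

In the paper's PECL setting the bootstrap weights would presumably come from the co-training process itself; uniform weights are used above only to keep the example self-contained and runnable.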
