Abstract

Many real-world machine learning tasks have very limited labeled data but a large amount of unlabeled data. To take advantage of the unlabeled data for enhancing learning performance, several semi-supervised learning techniques have been developed. In this paper, we propose a novel semi-supervised ensemble learning algorithm, termed Multi-Train, which generates a number of heterogeneous classifiers that use different classification models and/or different features. During the training process, each classifier is refined using unlabeled data, which are labeled by the majority prediction of the remaining classifiers. We hypothesize that the use of different models and different input features can promote the diversity of the ensemble, thereby improving the performance compared to existing methods such as the co-training and tri-training algorithms. Experimental results on UCI datasets clearly demonstrate the effectiveness of using heterogeneous ensembles in semi-supervised learning.
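
The following is a minimal sketch of the co-labeling loop described in the abstract, assuming scikit-learn base classifiers. The specific models, the agreement-based selection rule, and the number of rounds are illustrative assumptions, not details taken from the paper; the feature-subsampling component of Multi-Train is omitted for brevity.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier


def multi_train_sketch(X_labeled, y_labeled, X_unlabeled, n_rounds=10):
    """Illustrative loop: each classifier is retrained on unlabeled points
    pseudo-labeled by the majority vote of the remaining classifiers."""
    # Heterogeneous base models (assumed choices; the paper also varies
    # the input features per classifier, which is not shown here).
    models = [DecisionTreeClassifier(), GaussianNB(), KNeighborsClassifier()]
    models = [m.fit(X_labeled, y_labeled) for m in models]

    for _ in range(n_rounds):
        new_models = []
        for i, model in enumerate(models):
            others = [m for j, m in enumerate(models) if j != i]
            # Majority prediction of the remaining classifiers on unlabeled data.
            votes = np.stack([m.predict(X_unlabeled) for m in others])
            pseudo = np.apply_along_axis(
                lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)
            # Keep only points where all other classifiers agree
            # (a simple confidence proxy; the paper's rule may differ).
            agree = np.all(votes == pseudo, axis=0)
            X_aug = np.vstack([X_labeled, X_unlabeled[agree]])
            y_aug = np.concatenate([y_labeled, pseudo[agree]])
            new_models.append(clone(model).fit(X_aug, y_aug))
        models = new_models
    return models
```

At prediction time, the refined ensemble would combine the members' outputs, for example by majority vote, mirroring how the pseudo-labels are produced during training.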
