Abstract

Class imbalance learning (CIL) has become one of the most challenging research topics. In this article, we propose a Boosted co-training method that modifies the class distribution so that traditional classifiers can be readily adapted to imbalanced datasets. This article is among the first to utilize the pseudo-labelled data of co-training to enlarge the training set of minority classes. Compared with existing oversampling methods, which generate minority samples based only on labelled data, the proposed method can learn from unlabelled data and thereby reduce the risk of overfitting. Furthermore, we propose a boosting-style technique that implicitly modifies the class distribution, and we combine it with co-training to alleviate the bias towards majority classes. Finally, we collect the two series of classifiers generated during Boosted co-training to build an ensemble for classification, which further improves CIL performance by leveraging the strength of ensemble learning. By taking advantage of the diversity of co-training, we also contribute a new approach to generating base classifiers for ensemble learning. The proposed method is compared with eight state-of-the-art CIL methods on a variety of benchmark datasets. Measured by G-Mean, F-Measure, and AUC, Boosted co-training achieves the best performance and average ranks on 18 benchmark datasets. The experimental results demonstrate its significant superiority over the other CIL methods.
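To make the high-level loop described above concrete, the sketch below shows one plausible shape of such a procedure; it is not the authors' implementation, only an illustration assembled from the abstract. The function names (`boosted_co_training`, `ensemble_predict`), the choice of decision trees as base learners, the weight-doubling update, and the parameters (`rounds`, `per_round`) are all assumptions, as is the convention that the minority class is labelled 1 and that two feature views `X1`/`X2` of the data are available.

```python
# A minimal sketch of a boosted co-training loop for class imbalance.
# Hypothetical illustration only: two view-specific classifiers pseudo-label
# the unlabelled pool to enlarge the minority class, sample weights are
# boosted toward misclassified minority samples, and every intermediate
# classifier is kept for a final ensemble.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boosted_co_training(X1, X2, y, U1, U2, rounds=10, per_round=5):
    """X1/X2: two feature views of the labelled data (minority class = 1);
    U1/U2: the same two views of the unlabelled pool."""
    w = np.ones(len(y)) / len(y)              # boosting-style sample weights
    ensemble = []                             # classifiers from both views
    for _ in range(rounds):
        h1 = DecisionTreeClassifier(max_depth=3).fit(X1, y, sample_weight=w)
        h2 = DecisionTreeClassifier(max_depth=3).fit(X2, y, sample_weight=w)
        ensemble += [(h1, 0), (h2, 1)]        # remember each model's view
        if len(U1) > 0:
            # Each view scores the unlabelled pool; keep the most confident
            # minority-class predictions to enlarge the minority training set.
            p1 = h1.predict_proba(U1)[:, 1]
            p2 = h2.predict_proba(U2)[:, 1]
            idx = np.argsort(-np.maximum(p1, p2))[:per_round]
            X1 = np.vstack([X1, U1[idx]])
            X2 = np.vstack([X2, U2[idx]])
            y = np.concatenate([y, np.ones(len(idx), dtype=int)])
            w = np.concatenate([w, np.full(len(idx), w.mean())])
            keep = np.setdiff1d(np.arange(len(U1)), idx)
            U1, U2 = U1[keep], U2[keep]
        # Boosting-style update: up-weight minority samples that the current
        # pair of classifiers misclassifies, biasing later rounds toward them.
        pred = ((h1.predict(X1) + h2.predict(X2)) >= 1).astype(int)
        w[(pred != y) & (y == 1)] *= 2.0
        w /= w.sum()
    return ensemble

def ensemble_predict(ensemble, x1, x2):
    """Majority vote over all classifiers collected from both views."""
    votes = [h.predict((x1 if view == 0 else x2).reshape(1, -1))[0]
             for h, view in ensemble]
    return int(np.mean(votes) >= 0.5)
```

Keeping every intermediate classifier from both views, rather than only the final pair, is what would give such an ensemble the diversity the abstract attributes to co-training.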
