Clustering analysis is widely used as a preprocessing step to discover the data distribution for resampling. Existing clustering-based resampling methods mostly run unsupervised clustering on labeled data without taking advantage of the class information to guide the clustering. When labeled data are scarce, such clustering can hardly capture the underlying data distribution. In this paper, we propose a semi-supervised hybrid resampling (SSHR) method which runs semi-supervised clustering to capture the data distribution for both over-sampling and under-sampling. Firstly, we design a semi-supervised hierarchical clustering algorithm (SSHC) which uses labeled data to guide the clustering procedure on the whole dataset. Specifically, labeled data are used to initialize a clustering model and then guide its updating via an iterative cluster-splitting process. In this way, the original classes are divided into multiple disjoint clusters, which contributes to disclosing not only the inter-class imbalance but also the intra-class imbalance. Subsequently, hybrid resampling is performed according to the result of SSHC. Labeled data of the majority class are under-sampled according to their distances to the cluster centroids and their adjacency to minority cluster centroids. Furthermore, we propose a novel over-sampling approach which selects confident unlabeled data in minority clusters as pseudo-labeled data to enlarge the training set. Compared with traditional over-sampling methods, our approach contributes to discovering more about the distribution of the minority class. To validate the effectiveness of SSHR, we conduct extensive experiments on 44 benchmark datasets. Our method achieves the best performance in terms of both F-measure and AUC. The Friedman test demonstrates that SSHR significantly outperforms the compared state-of-the-art resampling algorithms.
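The sketch below is a minimal, hypothetical illustration of the pipeline described in the abstract, not the authors' implementation: a KMeans run seeded by labeled class means stands in for the SSHC hierarchical cluster-splitting (so each class is collapsed to a single cluster rather than multiple ones), majority samples are kept by distance to their cluster centroid, and confident unlabeled points in minority clusters are pseudo-labeled. All function and parameter names (`sshr_sketch`, `keep_ratio`, `pseudo_quantile`) are assumptions introduced here for illustration.

```python
# Hypothetical sketch of an SSHR-style resampling step (binary case).
# Not the paper's SSHC/SSHR code: label-seeded KMeans approximates the
# semi-supervised clustering, and simple distance rules drive resampling.
import numpy as np
from sklearn.cluster import KMeans

def sshr_sketch(X_lab, y_lab, X_unlab, keep_ratio=0.5, pseudo_quantile=0.25):
    classes, counts = np.unique(y_lab, return_counts=True)
    maj, mino = classes[np.argmax(counts)], classes[np.argmin(counts)]

    # 1) Label-guided clustering surrogate: seed one centroid per class,
    #    then refine on labeled + unlabeled data jointly.
    seeds = np.vstack([X_lab[y_lab == c].mean(axis=0) for c in classes])
    km = KMeans(n_clusters=len(classes), init=seeds, n_init=1).fit(
        np.vstack([X_lab, X_unlab]))

    # Tag each cluster with the dominant label among its labeled members.
    lab_assign = km.predict(X_lab)
    cluster_label = {
        k: classes[np.argmax([np.sum((lab_assign == k) & (y_lab == c))
                              for c in classes])]
        for k in range(km.n_clusters)}

    # 2) Under-sampling: keep only the majority samples closest to their
    #    assigned cluster centroid.
    maj_idx = np.where(y_lab == maj)[0]
    d_maj = np.linalg.norm(
        X_lab[maj_idx] - km.cluster_centers_[lab_assign[maj_idx]], axis=1)
    keep = maj_idx[np.argsort(d_maj)[: int(keep_ratio * len(maj_idx))]]

    # 3) Over-sampling: pseudo-label confident unlabeled points that fall in
    #    minority clusters (confidence = small distance to the centroid).
    unlab_assign = km.predict(X_unlab)
    in_min = np.where([cluster_label[k] == mino for k in unlab_assign])[0]
    d_min = np.linalg.norm(
        X_unlab[in_min] - km.cluster_centers_[unlab_assign[in_min]], axis=1)
    conf = (in_min[d_min <= np.quantile(d_min, pseudo_quantile)]
            if len(in_min) > 0 else in_min)

    # Rebalanced training set: trimmed majority + minority + pseudo-labeled.
    X_new = np.vstack([X_lab[keep], X_lab[y_lab == mino], X_unlab[conf]])
    y_new = np.concatenate([y_lab[keep], y_lab[y_lab == mino],
                            np.full(len(conf), mino)])
    return X_new, y_new
```

This collapses the paper's iterative cluster splitting (which exposes intra-class imbalance via multiple clusters per class) into a single cluster per class for brevity; the full method would operate on the finer SSHC clusters.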