Abstract

This paper addresses the asymmetric problem of sample classification under extreme class imbalance. Inspired by the improvement directions for extreme-imbalance classification proposed by Krawczyk (2016), we adopt the AdaBoost framework and optimize the sample weight update function at each iteration. The update considers not only the sampling weights of misclassified samples but also places greater emphasis on misclassified minority-class samples. This makes the model more adaptive to imbalanced, and even extremely imbalanced, class distributions, allows more adaptive weight adjustment for hard-to-classify samples, and restores symmetry between the minority and majority classes by adjusting the class distribution of the dataset. On this basis, we construct an imbalance boosting model, Imbalance AdaBoost (ImAdaBoost). In the experiments, ImAdaBoost is compared with the original AdaBoost and mainstream imbalance-classification models on datasets with different imbalance ratios, including an extremely imbalanced dataset. The results show that ImAdaBoost achieves good minority-class recall on weakly extreme and generally imbalanced sets; on the weakly extreme imbalance set, the average minority-class recall of the mainstream imbalance-classification models is 7% lower than that of ImAdaBoost. On the extremely imbalanced dataset, ImAdaBoost keeps minority-class recall at the middle level among the compared models while performing well on the comprehensive F1-score, demonstrating strong stability of minority-class classification.
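To make the idea concrete, the following is a minimal sketch of one weight-update step of the kind the abstract describes: misclassified samples are up-weighted as in standard AdaBoost, and misclassified minority-class samples receive extra emphasis. The extra factor `gamma` and the exact update form are assumptions for illustration only; the paper's actual ImAdaBoost update function is not given in the abstract.

```python
import numpy as np

def imbalance_boost_update(weights, y_true, y_pred, alpha,
                           minority_label=1, gamma=1.5):
    """Illustrative ImAdaBoost-style weight update (hypothetical form).

    weights        -- current sample weight distribution (sums to 1)
    y_true, y_pred -- true and predicted labels for this round
    alpha          -- weight of the current weak learner
    minority_label -- label of the minority class
    gamma          -- assumed extra factor for misclassified minority
                      samples (not from the paper)
    """
    misclassified = (y_true != y_pred)
    # Standard AdaBoost-style up-weighting of misclassified samples.
    new_w = weights * np.exp(alpha * misclassified)
    # Extra emphasis on misclassified minority-class samples.
    new_w[misclassified & (y_true == minority_label)] *= gamma
    # Renormalize so the weights again form a distribution.
    return new_w / new_w.sum()
```

With `gamma > 1`, a misclassified minority sample ends up with a strictly larger weight than an equally misclassified majority sample, which is the asymmetry the abstract attributes to the model.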
