Abstract

The Adaptive Boosting (AdaBoost) algorithm is a widely used ensemble learning framework that achieves good classification results on general datasets. However, it is difficult to apply AdaBoost directly to imbalanced data, since the algorithm is designed mainly to focus on misclassified samples rather than on samples of minority classes. To better process imbalanced data, this paper introduces the Area Under Curve (AUC) indicator, which reflects the comprehensive performance of a model, and proposes an improved AdaBoost algorithm based on AUC (AdaBoost-A) that improves AdaBoost's error calculation by jointly considering the misclassification probability and the AUC. To prevent the redundant or useless weak classifiers generated by the traditional AdaBoost algorithm from consuming too many system resources, this paper further proposes an ensemble algorithm, PSOPD-AdaBoost-A, which re-initializes parameters to avoid falling into local optima and optimizes the coefficients of the AdaBoost weak classifiers. Experimental results show that the proposed algorithm is effective for processing imbalanced data, especially data with relatively high imbalance ratios.
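The abstract's central idea, blending the weighted misclassification rate with an AUC term when scoring each round, can be sketched as follows. This is a minimal illustration, not the paper's exact formula: the blend weight `beta` and the names `auc_score` and `blended_error` are assumptions for illustration.

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive sample is scored above a randomly
    chosen negative one (ties count half)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

def blended_error(y_true, y_pred, scores, w, beta=0.5):
    """Hypothetical AdaBoost-A style error: the usual weighted
    misclassification rate blended with (1 - AUC), so a round that
    ranks the minority class poorly is penalized even when its plain
    error rate looks low. `beta` and the linear blend are assumed;
    the paper's exact combination may differ."""
    eps = np.sum(w * (y_pred != y_true)) / np.sum(w)
    return beta * eps + (1.0 - beta) * (1.0 - auc_score(y_true, scores))
```

On heavily imbalanced data a classifier that predicts the majority class everywhere has a small weighted error but an AUC near 0.5, so the blended quantity stays large, which is the behavior the AUC term is meant to capture.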

Highlights

  • Since imbalanced data can be found in any area, effective classification of imbalanced data has become critical for many applications

  • Comparisons in terms of accuracy, precision, and F1 value show that the proposed ensemble algorithm is more effective in processing imbalanced data than many improved algorithms on the PC3 dataset, although its recall is lower than that of the Smote method

  • We propose an improved AdaBoost algorithm (AdaBoost-A)
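The coefficient search behind PSOPD-AdaBoost-A, particle swarm optimization with parameter re-initialization to escape local optima, can be sketched as a generic PSO loop. This is a simplified stand-in: the stagnation-counter re-initialization rule, the hyperparameters, and the function name `pso_optimize` are assumptions, since the abstract does not give the exact PSOPD mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_optimize(fitness, dim, n_particles=20, iters=50,
                 inertia=0.7, c1=1.5, c2=1.5, stall_limit=10):
    """Minimal PSO maximizer for a vector of weak-classifier
    coefficients. Particles whose personal best has not improved
    for `stall_limit` iterations are re-initialized -- a simplified
    stand-in for the re-initialization idea the paper describes."""
    x = rng.uniform(-1, 1, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest, pfit = x.copy(), np.array([fitness(p) for p in x])
    stall = np.zeros(n_particles, dtype=int)
    gbest = pbest[pfit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        improved = f > pfit
        pbest[improved], pfit[improved] = x[improved], f[improved]
        stall = np.where(improved, 0, stall + 1)
        # re-initialize stagnant particles to escape local optima
        dead = stall >= stall_limit
        x[dead] = rng.uniform(-1, 1, (dead.sum(), dim))
        v[dead] = 0.0
        stall[dead] = 0
        gbest = pbest[pfit.argmax()].copy()
    return gbest, pfit.max()
```

In the ensemble setting, `fitness` would evaluate the weighted vote of the trained weak classifiers under a candidate coefficient vector; here any smooth objective demonstrates the loop.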


Summary

Introduction

Since imbalanced data can be found in any area, effective classification of imbalanced data has become critical for many applications. Although the AdaBoost algorithm can be used directly to process imbalanced data, it focuses more on misclassified samples than on samples of the minority class, and it may generate many redundant or useless weak classifiers, increasing processing overhead and reducing performance. Yang et al. [14] used mathematical analysis and graphical methods to clarify the working principle of multiclass AdaBoost and proposed a novel approach for processing multiclass data. This algorithm reduces the requirements on weak classifiers while ensuring classification effectiveness. Guo et al. [18] treated samples of the majority class that exceeded a threshold during iteration as noise, and proposed four algorithms (i.e., A-AdaBoost, B-AdaBoost, C-AdaBoost and D-AdaBoost) based on limiting threshold growth and modifying class labels. Results show that these algorithms can effectively process imbalanced data.
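The class-blindness noted above is visible in the standard AdaBoost weight update, sketched below for the binary ±1 formulation. The update depends only on whether each sample was misclassified, not on which class it belongs to, so minority samples receive no special treatment.

```python
import numpy as np

def adaboost_round(w, y, h):
    """One weight update of standard binary AdaBoost (labels in
    {-1, +1}). `w` are current sample weights, `y` true labels,
    `h` the weak classifier's predictions. Assumes 0 < error < 0.5.
    Note the update keys only on correctness (y*h), never on class
    membership -- the reason plain AdaBoost can neglect the
    minority class on imbalanced data."""
    eps = np.sum(w[h != y]) / np.sum(w)        # weighted error
    coeff = 0.5 * np.log((1 - eps) / eps)      # classifier weight
    w_new = w * np.exp(-coeff * y * h)         # up-weight mistakes
    return w_new / w_new.sum(), coeff
```

A classical property follows directly: after normalization, the misclassified samples always carry exactly half of the total weight, regardless of how many of them came from the minority class.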

Background
The Proposed Approach
The AdaBoost-A Algorithm
The PSOPD-AdaBoost-A Ensemble Algorithm
Test Data
Analysis of the AdaBoost-A Algorithm
Performance Comparison
Analysis of the PSOPD-AdaBoost-A Ensemble Algorithm
Comparison of the PSOPD-AdaBoost-A and Other Improved Algorithms
Results
Conclusions