Abstract

Ensemble methods, such as the traditional bagging algorithm, can usually improve the performance of a single classifier. However, they typically require large storage space and relatively time-consuming predictions. Many approaches have been developed to reduce the ensemble size and improve classification performance by pruning the traditional bagging algorithm. In this article, we propose a two-stage strategy to prune the traditional bagging algorithm by combining two simple approaches: accuracy-based pruning (AP) and distance-based pruning (DP). These two methods, as well as their two combinations, “AP+DP” and “DP+AP”, as the two-stage pruning strategy, were all examined. Compared with the single pruning methods, we found that the two-stage pruning methods can further reduce the ensemble size and improve the classification accuracy. The “AP+DP” method generally performs better than the “DP+AP” method when using four base classifiers: decision tree, Gaussian naive Bayes, K-nearest neighbor, and logistic regression. Moreover, compared with traditional bagging, the two-stage “AP+DP” method improved the classification accuracy by 0.88%, 4.06%, 1.26%, and 0.96%, respectively, averaged over 28 datasets under the four base classifiers. “AP+DP” also outperformed three other existing algorithms, Brag, Nice, and TB, assessed on 8 common datasets. In summary, the proposed two-stage pruning methods are simple and promising approaches that can both reduce the ensemble size and improve the classification accuracy.
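As a concrete illustration of the first stage, the sketch below shows one way the AP step could be implemented for a bagging ensemble of scikit-learn classifiers: each base model is scored on its out-of-bag (OOB) sample, and only models whose OOB accuracy reaches a threshold t_a are retained. The helper names (`fit_bagging_with_oob`, `accuracy_prune`) and the exact threshold semantics of t_a are our assumptions for illustration, not the paper's reference implementation; inputs are assumed to be NumPy arrays.

```python
import numpy as np
from sklearn.base import clone

def fit_bagging_with_oob(base, X, y, n_estimators=50, seed=0):
    """Fit a plain bagging ensemble, recording each model's OOB accuracy."""
    rng = np.random.default_rng(seed)
    n = len(X)
    models, boot_X, oob_accs = [], [], []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)        # bootstrap sample (with replacement)
        oob = np.setdiff1d(np.arange(n), idx)   # out-of-bag indices
        m = clone(base).fit(X[idx], y[idx])
        models.append(m)
        boot_X.append(X[idx])                   # kept for the later DP stage
        oob_accs.append(m.score(X[oob], y[oob]) if len(oob) else 0.0)
    return models, boot_X, np.array(oob_accs)

def accuracy_prune(models, boot_X, oob_accs, t_a=0.7):
    """AP stage: retain only models whose OOB accuracy is at least t_a."""
    kept = [(m, Xb) for m, Xb, a in zip(models, boot_X, oob_accs) if a >= t_a]
    return [m for m, _ in kept], [Xb for _, Xb in kept]
```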

Highlights

  • To improve predictive performance, ensemble methods, with bagging [1] and boosting [2, 3] as representatives, are generally constructed as a linear combination of a set of fitted models rather than a single fit of a base classifier or learner [4, 5]

  • We proposed a two-stage bagging pruning approach, which is composed of two independent methods: accuracy-based pruning (AP) and distance-based pruning (DP)

  • For the pruning methods including AP, DP, “AP+DP”, and “DP+AP”, we only reported the accuracy values together with the optimized parameters t_a and t_d, listed in parentheses


Summary

Introduction

Aiming at improving predictive performance, ensemble methods, with bagging [1] and boosting [2, 3] as representatives, are in general constructed as a linear combination of a set of fitted models, instead of a single fit of a base classifier or learner [4, 5]. Hothorn and Lausen (2003) [17] proposed a double-bagging method to deal with the problems of variable and model selection bias; this approach combined linear discriminant analysis and classification trees to generate ensemble machines. We proposed a two-stage bagging pruning approach, which is composed of two independent methods: accuracy-based pruning (AP) and distance-based pruning (DP). These two methods can be combined in either order, and the combination comprises the two-stage strategy. For the former, i.e., the AP procedure, among all models established in traditional bagging, those base models with the highest prediction accuracy (or the lowest error rates) validated on their out-of-bag samples were selected and retained. For the latter, i.e., the DP procedure, we utilized the specificity of a test sample to select a subset of the fitted models in the ensemble, as in the sketch below.
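To make the two-stage idea concrete, the following sketch gives one plausible reading of the DP stage applied after AP (“AP+DP”): for each test sample, only the models whose bootstrap training data lie within a distance threshold t_d of that sample are allowed to vote. The function name and the nearest-point Euclidean distance criterion are our assumptions, not necessarily the paper's exact definition; the `models` and `boot_X` lists are those returned by the AP sketch above.

```python
from collections import Counter
import numpy as np

def distance_pruned_predict(models, boot_X, x_test, t_d=1.0):
    """DP stage at prediction time: only models whose bootstrap training
    data lie within distance t_d of the test point get to vote."""
    votes = []
    for m, Xb in zip(models, boot_X):
        d = np.linalg.norm(Xb - x_test, axis=1).min()  # nearest bootstrap point
        if d <= t_d:                                   # model is "close" to x_test
            votes.append(m.predict(x_test.reshape(1, -1))[0])
    if not votes:  # fall back to the full (AP-pruned) ensemble if none qualify
        votes = [m.predict(x_test.reshape(1, -1))[0] for m in models]
    return Counter(votes).most_common(1)[0][0]         # majority vote
```

Under these assumptions, “AP+DP” amounts to calling `accuracy_prune` once after fitting and then `distance_pruned_predict` per test sample, while “DP+AP” would apply the distance filter before the accuracy filter.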

Preliminaries
Two-Stage Pruning Algorithms for Bagging
Contraceptive Method Choice
Analysis of Experimental Results
Findings
Conclusion
