Abstract

This paper investigates the performance enhancement of base classifiers within the AdaBoost framework applied to medical datasets. Adaptive boosting (AdaBoost), an instance of boosting, combines base classifiers to enhance their performance. We conducted a comprehensive experiment to assess the efficacy of twelve base classifiers within the AdaBoost framework, namely Bayes network, decision stump, ZeroR, decision tree, Naïve Bayes, J-48, voted perceptron, random forest, bagging, random tree, stacking, and AdaBoost itself. The experiments were carried out on five datasets from the medical domain covering various types of cancer: global cancer map (GCM), lymphoma-I, lymphoma-II, leukaemia, and embryonal tumours. The evaluation focuses on the accuracy, precision, and efficiency of the base classifiers within the AdaBoost framework. The results show that Naïve Bayes, Bayes network, and voted perceptron improve far more than the remaining base classifiers, attaining accuracies as high as 94.74%, 97.78%, and 97.78%, respectively. The results also show that in most cases the base classifiers perform better with AdaBoost than they do on their own; for example, the accuracy of voted perceptron improves by up to 13.34%, and that of bagging by up to 7%. This research aims to identify the base classifiers with the best boosting capability within the AdaBoost framework for medical datasets. These results are significant because they provide insight into how base classifiers behave within a boosting framework, which can guide classifier selection in scenarios where individual classifiers do not perform up to the mark.
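To illustrate the standalone-versus-boosted comparison described above, the following minimal sketch scores a Naïve Bayes classifier on its own and then wrapped in AdaBoost. It is not the paper's setup: the experiments use Weka-style classifiers, whereas this sketch assumes scikit-learn, a bundled dataset as a stand-in for the medical datasets, and illustrative fold and ensemble sizes.

```python
# Minimal sketch (assumed scikit-learn setup, not the paper's Weka experiments):
# compare a base classifier alone vs. wrapped in the AdaBoost framework.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Stand-in dataset; the paper uses GCM, lymphoma, leukaemia, and tumour data.
X, y = load_breast_cancer(return_X_y=True)

base = GaussianNB()  # Naïve Bayes, one of the twelve base classifiers studied
# scikit-learn >= 1.2 uses estimator=; older versions use base_estimator=.
boosted = AdaBoostClassifier(estimator=base, n_estimators=50, random_state=0)

for name, clf in [("Naive Bayes alone", base), ("AdaBoost + Naive Bayes", boosted)]:
    scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.4f}")
```

The same loop can be repeated over other base classifiers (e.g., a decision stump via a depth-1 decision tree) to reproduce the kind of comparison the paper reports.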
