Abstract

Software defect prediction is an active research area in software engineering. Accurate prediction of software defects helps software engineers guide software quality assurance activities. In machine learning, ensemble learning has been shown to improve prediction performance over individual models. Recently, many Tree-based ensembles have been proposed in the literature, yet their prediction capabilities have not been investigated in defect prediction. In this paper, we empirically investigate the prediction performance of seven Tree-based ensembles in defect prediction. Two are bagging ensembles, Random Forest and Extra Trees, while the other five are boosting ensembles: AdaBoost, Gradient Boosting, Hist Gradient Boosting, XGBoost and CatBoost. The study utilizes 11 publicly available NASA MDP software defect datasets. Empirical results indicate the superiority of the Tree-based bagging ensembles, Random Forest and Extra Trees, over the Tree-based boosting ensembles. However, none of the investigated Tree-based ensembles performed significantly worse than individual decision trees. Finally, AdaBoost was the worst-performing of all the Tree-based ensembles.
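
The following is a minimal sketch, not the authors' exact experimental setup, of how the seven Tree-based ensembles and a decision-tree baseline could be instantiated and compared with scikit-learn, XGBoost and CatBoost. The hyperparameters (library defaults), the cross-validation scheme and the AUC metric are assumptions; the NASA MDP datasets are not bundled with these libraries and must be loaded separately.

```python
# Sketch only: defaults, CV folds and metric are assumptions, not the paper's protocol.
from sklearn.ensemble import (
    RandomForestClassifier,
    ExtraTreesClassifier,
    AdaBoostClassifier,
    GradientBoostingClassifier,
    HistGradientBoostingClassifier,
)
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

models = {
    # Bagging-style ensembles
    "Random Forest": RandomForestClassifier(random_state=0),
    "Extra Trees": ExtraTreesClassifier(random_state=0),
    # Boosting ensembles
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
    "Hist Gradient Boosting": HistGradientBoostingClassifier(random_state=0),
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=0),
    "CatBoost": CatBoostClassifier(verbose=0, random_state=0),
    # Individual decision tree as the non-ensemble baseline
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}

def evaluate(X, y, cv=10, scoring="roc_auc"):
    """Cross-validate every model on one defect dataset.

    X: software metrics (features), y: binary defect labels.
    Returns the mean score per model.
    """
    return {
        name: cross_val_score(model, X, y, cv=cv, scoring=scoring).mean()
        for name, model in models.items()
    }
```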
