Abstract

Practitioners and researchers in machine learning should have a thorough understanding of how to select the right performance metrics for classifier evaluation. Using a credit card fraud dataset, we demonstrate that the Area Under the Precision-Recall Curve (AUPRC) is a more reliable metric than the Area Under the Receiver Operating Characteristic Curve (AUC) for classifying highly imbalanced data. Furthermore, we establish that AUC is minimally impacted by the use of Random Undersampling (RUS). The classifiers used in this study are ensemble learners: LightGBM, CatBoost, Extremely Randomized Trees (ET), XGBoost, and Random Forest. Our results follow from the fact that, in a highly imbalanced dataset, the comparatively large number of true negative instances influences AUC but not AUPRC. Hence, AUPRC can accurately detect changes in the number of false positives because it ignores the true negatives.
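The abstract's central claim can be illustrated with a minimal sketch (not from the paper): on synthetic, highly imbalanced scores, degrading a classifier so that it produces many more false positives barely moves AUC but sharply lowers AUPRC. The example below assumes scikit-learn's roc_auc_score and average_precision_score and a made-up class ratio; it is only meant to show the mechanism, not to reproduce the study's results.

```python
# Illustrative sketch only: AUC is diluted by the large pool of true
# negatives, while AUPRC (built on precision) is not.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

n_neg, n_pos = 100_000, 100          # ~0.1% positive class, fraud-like imbalance
y_true = np.concatenate([np.zeros(n_neg), np.ones(n_pos)])

# Baseline scores: negatives low, positives high, little overlap.
scores = np.concatenate([rng.normal(0.2, 0.1, n_neg),
                         rng.normal(0.7, 0.1, n_pos)])

# Degraded scores: push 1,000 negatives into the positive score range,
# simulating a classifier that produces far more false positives.
degraded = scores.copy()
degraded[:1000] = rng.normal(0.7, 0.1, 1000)

for name, s in [("baseline", scores), ("degraded", degraded)]:
    print(f"{name}: AUC = {roc_auc_score(y_true, s):.3f}, "
          f"AUPRC = {average_precision_score(y_true, s):.3f}")
```

Because only about 1% of the negatives are mis-scored, the false-positive rate (and hence AUC) changes very little, while precision collapses once roughly ten mis-scored negatives accompany every true positive, which is exactly the sensitivity to false positives that AUPRC provides.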
