Abstract

This study explores the effects of class label noise on fraud detection in three highly imbalanced healthcare fraud data sets containing millions of claims and minority class sizes as small as 0.1%. For each data set, 29 noise distributions are simulated by varying the level of class noise and the distribution of noise between the fraudulent and non-fraudulent classes. Four popular machine learning algorithms are evaluated on each noise distribution using six rounds of five-fold cross-validation. Performance is measured using the area under the precision-recall curve (AUPRC), true positive rate (TPR), and true negative rate (TNR) to understand the effects of the noise level, the noise distribution, and their interactions. AUPRC results show that negative class noise, i.e., fraudulent samples incorrectly labeled as non-fraudulent, is the most detrimental to model performance. TPR and TNR results show significant trade-offs in class-wise performance as noise shifts between the positive and negative classes. Finally, the results reveal how overfitting degrades the classification performance of some learners, and how simple regularization can combat this overfitting and improve classification performance across all noise distributions.
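The abstract does not detail the noise-injection procedure, but a minimal sketch of one plausible setup might look like the following. It assumes pair-wise label flips, noise injected into training labels only within each fold, a random forest standing in for the four unnamed learners, and average precision as the AUPRC estimate; the function and parameter names (inject_class_noise, noise_level, positive_share) are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import RepeatedStratifiedKFold

def inject_class_noise(y, noise_level, positive_share, seed=0):
    """Return a copy of binary labels y (1 = fraud) with flipped labels.

    noise_level: overall fraction of labels to corrupt.
    positive_share: portion of corrupted labels drawn from the fraud
        class (fraud -> non-fraud flips, i.e. negative class noise);
        the remainder flips non-fraud -> fraud.
    """
    rng = np.random.default_rng(seed)
    y_noisy = y.copy()
    n_flips = int(round(noise_level * len(y)))
    n_pos = min(int(round(positive_share * n_flips)), int((y == 1).sum()))
    n_neg = n_flips - n_pos

    pos_idx = rng.choice(np.flatnonzero(y == 1), size=n_pos, replace=False)
    neg_idx = rng.choice(np.flatnonzero(y == 0), size=n_neg, replace=False)
    y_noisy[pos_idx] = 0  # fraud mislabeled as non-fraud
    y_noisy[neg_idx] = 1  # non-fraud mislabeled as fraud
    return y_noisy

def evaluate(X, y, noise_level, positive_share):
    """Six rounds of five-fold cross-validation, scored with AUPRC."""
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=6, random_state=0)
    scores = []
    for train_idx, test_idx in cv.split(X, y):
        # Corrupt only the training labels so that the clean test
        # labels measure true detection performance.
        y_train = inject_class_noise(y[train_idx], noise_level, positive_share)
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X[train_idx], y_train)
        probs = model.predict_proba(X[test_idx])[:, 1]
        scores.append(average_precision_score(y[test_idx], probs))
    return float(np.mean(scores))
```

Scoring against the original, uncorrupted test labels is one way to isolate the effect of training-label noise; whether the study evaluates against clean or noisy labels is not stated in the abstract.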
