Abstract

The Synthetic Minority Oversampling TEchnique (SMOTE) is widely used for the analysis of imbalanced datasets. It is known that SMOTE frequently over-generalizes the minority class, leading to misclassifications of the majority class and affecting the overall balance of the model. In this article, we present an approach that overcomes this limitation of SMOTE, employing Localized Random Affine Shadowsampling (LoRAS) to oversample from an approximated data manifold of the minority class. We benchmarked our algorithm on 14 publicly available imbalanced datasets using three different Machine Learning (ML) algorithms and compared the performance of LoRAS, SMOTE and several SMOTE extensions that, like LoRAS, use convex combinations of minority class data points for oversampling. We observed that LoRAS, on average, generates better ML models in terms of F1-Score and Balanced accuracy. Another key observation is that while most of the SMOTE extensions we tested improve the F1-Score relative to SMOTE on average, they compromise on the Balanced accuracy of a classification model. LoRAS, on the contrary, improves both the F1-Score and the Balanced accuracy, thus producing better classification models. Moreover, to explain the success of the algorithm, we have constructed a mathematical framework to prove that the LoRAS oversampling technique provides a better estimate of the mean of the underlying local data distribution of the minority class data space.
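
To make the contrast concrete, the sketch below illustrates the core idea in Python under simplifying assumptions of our own: a SMOTE-style sample is a convex combination of a minority point and one of its nearest minority neighbours, while a LoRAS-style sample is a convex combination of several Gaussian "shadow samples" drawn around a small minority neighbourhood. The function names, the neighbourhood size k, the noise level sigma, the number of shadow samples and the Dirichlet weights are illustrative choices, not the authors' reference implementation.

    # Minimal, illustrative sketch of SMOTE-style vs LoRAS-style sample generation.
    # Assumptions: X_min is a 2-D NumPy array of minority class samples; k, n_shadow,
    # sigma, n_comb and the Dirichlet weights are illustrative, not the paper's defaults.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)

    def smote_sample(X_min, k=5):
        # SMOTE: linear interpolation between a minority point and one of its k neighbours
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
        i = rng.integers(len(X_min))
        neighbours = nn.kneighbors(X_min[i:i + 1], return_distance=False)[0][1:]
        j = rng.choice(neighbours)
        w = rng.uniform()                              # convex weight in [0, 1]
        return X_min[i] + w * (X_min[j] - X_min[i])

    def loras_sample(X_min, k=5, n_shadow=40, sigma=0.05, n_comb=10):
        # LoRAS-style: Gaussian "shadow samples" around a minority neighbourhood,
        # then a random convex combination (non-negative weights summing to 1)
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
        i = rng.integers(len(X_min))
        hood = nn.kneighbors(X_min[i:i + 1], return_distance=False)[0]
        parents = X_min[rng.choice(hood, size=n_shadow)]
        shadows = parents + rng.normal(scale=sigma, size=parents.shape)
        idx = rng.choice(n_shadow, size=n_comb, replace=False)
        weights = rng.dirichlet(np.ones(n_comb))       # convex weights
        return weights @ shadows[idx]

Repeating loras_sample until the classes are balanced yields an oversampled minority set; because each synthetic point averages several nearby shadow samples, it tends to concentrate around the local mean of the neighbourhood rather than spreading along lines between distant minority points.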

Highlights

  • Imbalanced datasets occur frequently across the broad spectrum of fields in which Machine Learning (ML) has found application, including business, finance and banking, as well as bio-medical science

  • We select the five datasets with the highest number of features among our tested datasets and present the performances of the selected ML methods in Table 5. From our results for high-dimensional datasets, we observe that Localized Random Affine Shadowsampling (LoRAS) produces the best F1-Score and the second-best Balanced accuracy on average among all oversampling models, with Borderline-2 Synthetic Minority Oversampling TEchnique (SMOTE) beating LoRAS marginally

  • From our study we infer that, for tabular, high-dimensional and highly imbalanced datasets, our LoRAS oversampling approach can better estimate the mean of the underlying local distribution around a minority class sample and can improve the Balanced accuracy and F1-Score of ML classification models

Introduction

Imbalanced datasets are frequent occurrences in a large spectrum of fields where Machine Learning (ML) has found its applications, including business, finance and banking, as well as bio-medical science. Oversampling approaches are a popular choice to deal with imbalanced datasets (Chawla et al. 2002; Han et al. 2005; Haibo et al. 2008; Bunkhumpornpat et al. 2009; Barua et al. 2014). We here present Localized Random Affine Shadowsampling (LoRAS), which produces better ML models for imbalanced datasets compared to state-of-the-art oversampling techniques such as SMOTE and several of its extensions. We validated the approach with 12 publicly available imbalanced datasets, comparing the performances of several state-of-the-art convex-combination-based oversampling techniques with LoRAS. The average performance of LoRAS on all these datasets is better than that of the other oversampling techniques we investigated. We have constructed a mathematical framework to prove that LoRAS is a more effective oversampling technique, since it provides a better estimate of the local mean of the underlying data distribution in a neighbourhood of the minority class data space.
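
The flavour of that argument can be sketched as follows, under simplifying assumptions of our own (neighbourhood points treated as i.i.d. with mean mu and covariance Sigma, equal convex weights, independent Gaussian shadow noise); the paper's actual framework is more general.

    % Sketch only: simplified variance comparison, not the paper's exact derivation
    Let $X_1,\dots,X_k$ be the minority points of a small neighbourhood, assumed i.i.d.
    with mean $\mu$ and covariance $\Sigma$. A SMOTE sample interpolates two of them,
    $$ S = (1-W)\,X_i + W\,X_j, \qquad W \sim \mathrm{U}(0,1), $$
    so $\mathbb{E}[S] = \mu$ and
    $\operatorname{Cov}(S) = \big(\mathbb{E}[(1-W)^2] + \mathbb{E}[W^2]\big)\,\Sigma = \tfrac{2}{3}\,\Sigma$.
    A LoRAS sample combines $m$ shadow samples $X_{j_\ell} + \varepsilon_\ell$ with
    $\varepsilon_\ell \sim \mathcal{N}(0,\sigma^2 I)$; taking equal weights $1/m$ for simplicity,
    $$ L = \frac{1}{m}\sum_{\ell=1}^{m}\big(X_{j_\ell} + \varepsilon_\ell\big), \qquad
       \mathbb{E}[L] = \mu, \qquad \operatorname{Cov}(L) = \frac{\Sigma + \sigma^2 I}{m}, $$
    so for $m \ge 2$ and small $\sigma$ a LoRAS sample is a lower-variance estimator of the
    local mean $\mu$ than a SMOTE sample.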

