Abstract

This study investigates the effectiveness of the Synthetic Minority Oversampling Technique (SMOTE) in conjunction with convolutional neural network (CNN) models, including both single and ensemble classifiers. The objective of this research is to address the difficulty of multi-class imbalanced image classification. The application of SMOTE to imbalanced image datasets remains underexplored, even though CNNs have proven successful in image classification and ensemble learning approaches have further improved their performance. To investigate whether SMOTE can increase classification accuracy and other performance measures when combined with CNN-based classifiers, our research uses a step-imbalanced CIFAR-10 dataset with varying imbalance ratios. We conducted experiments with five distinct models, namely AdaBoost, XGBoost, standalone CNN, CNN-AdaBoost, and CNN-XGBoost, on both the imbalanced and the SMOTE-balanced datasets. The evaluation included metrics such as accuracy, precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUC). The findings indicate that SMOTE dramatically improves minority-class accuracy, and that combining ensemble classifiers with CNNs and oversampling techniques significantly improves overall classification performance, particularly in situations with high class imbalance. This study demonstrates the potential of merging oversampling techniques with CNN-based ensemble classifiers to mitigate the effects of class imbalance in image datasets, suggesting a promising direction for future research in this area.
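The core of SMOTE as used here is interpolation between a minority sample and one of its k nearest minority neighbours. The sketch below illustrates that idea on toy 2-D data rather than the paper's CIFAR-10 setup; the function name, toy data, and parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: synthesize n_new minority samples by
    interpolating between each chosen sample and a random one of its
    k nearest minority-class neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # pairwise Euclidean distances among minority samples
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    k = min(k, n - 1)
    nn = np.argsort(d, axis=1)[:, :k]    # k nearest minority neighbours
    base = rng.integers(0, n, size=n_new)                 # base sample per new point
    nbr = nn[base, rng.integers(0, k, size=n_new)]        # random neighbour of it
    gap = rng.random((n_new, 1))                          # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[nbr] - X_min[base])

# toy imbalanced two-class data: 100 majority vs 10 minority points
data_rng = np.random.default_rng(0)
X_maj = data_rng.normal(0.0, 1.0, size=(100, 2))
X_min = data_rng.normal(3.0, 1.0, size=(10, 2))
X_syn = smote(X_min, n_new=90, k=5, rng=1)
# after oversampling, both classes contain 100 samples
print(len(X_maj), len(X_min) + len(X_syn))
```

In practice a library implementation such as `imblearn.over_sampling.SMOTE` would be used; for images, oversampling is typically applied to flattened pixel vectors or learned feature vectors before classifier training.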
