Abstract

In this work, we study how class-weight hyperparameters can be tuned to balance the relative importance of fraudulent and legitimate transactions. We use Bayesian optimization to tune hyperparameters while accounting for practical issues such as imbalanced data. We propose weight-tuning as a pre-processing step for imbalanced data and use CatBoost and XGBoost alongside LightGBM to improve performance under these conditions. Finally, we apply deep learning to fine-tune the hyperparameters, in particular our proposed weight-tuning approach, to further improve overall performance. We validate the proposed methods in experiments on real-world data. In addition to the standard ROC-AUC, we report precision-recall metrics, which are better suited to imbalanced datasets. CatBoost, LightGBM, and XGBoost are each evaluated with 5-fold cross-validation, and the combined models are evaluated with a majority-voting ensemble. The results show that LightGBM and XGBoost reach ROC-AUC = 0.95, precision = 0.79, recall = 0.80, F1-score = 0.79, and MCC = 0.79. Tuning the hyperparameters with deep learning and Bayesian optimization yields ROC-AUC = 0.94, precision = 0.80, recall = 0.82, F1-score = 0.81, and MCC = 0.81, a substantial improvement over the state-of-the-art methods used for comparison.
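As a rough illustration of two of the ideas mentioned above, and not the paper's own implementation: class weights chosen inversely proportional to class frequency (so the rare fraud class is weighted more heavily), and a majority-voting ensemble over the predictions of several models. The function names and toy data below are hypothetical.

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights: the rare (fraud) class gets a larger weight."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

def majority_vote(*prediction_lists):
    """Element-wise majority vote across the predictions of several models."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*prediction_lists)]

# Toy imbalanced labels: 8 genuine (0), 2 fraudulent (2 is 20% of 10)
y = [0] * 8 + [1] * 2
w = class_weights(y)
print(w[1] / w[0])  # fraud class weighted 4.0x heavier

# Three hypothetical model outputs combined by vote
p1, p2, p3 = [0, 1, 1], [0, 0, 1], [1, 1, 1]
print(majority_vote(p1, p2, p3))  # [0, 1, 1]
```

In practice the weight dictionary would be passed to each booster (e.g. via a class-weight or positive-class-weight parameter), and Bayesian optimization would search over it along with the other hyperparameters.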

