Dealing with imbalanced data is a crucial and challenging part of developing effective machine-learning models for classification. Without proper handling, class imbalance significantly degrades a classifier's performance, leading to suboptimal results. Many methods for managing imbalanced data have been studied and developed to restore data balance. In this paper, we conduct a comparative study that uses a ranking technique to evaluate the effectiveness of 66 traditional methods for addressing imbalanced data. Three classifiers, namely Decision Tree, Random Forest, and XGBoost, serve as the base classification models. The experiments are divided into two parts: the first evaluates the performance of the various imbalanced-data handling methods, while the second compares the performance of the top four oversampling methods. The study encompasses 50 datasets: 20 retrieved from the UCI repository and 30 sourced from the OpenML repository. The evaluation is based on the F-measure together with statistical methods, namely the Kruskal-Wallis test and the Borda Count, which are used to rank the imbalance-handling capability of the 66 methods. SMOTE serves as the benchmark for comparison owing to its popularity in handling imbalanced data. Based on the experimental results, the MCT, Polynom-fit-SMOTE, and CBSO methods were identified as the top three performers, demonstrating superior effectiveness in managing imbalanced datasets. This research can serve as a practical guide for practitioners in selecting suitable techniques for handling imbalanced data.
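The Borda Count aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's actual experimental code: the method names and F-measure values are hypothetical, and the Borda Count is implemented directly from its standard definition (on each dataset the best of n methods earns n-1 points, the worst earns 0, and points are summed across datasets).

```python
from collections import defaultdict

def borda_count(scores_by_dataset):
    """Aggregate per-dataset method scores into one Borda ranking.

    scores_by_dataset: list of dicts mapping method name -> F-measure
    on one dataset (higher is better).
    """
    totals = defaultdict(int)
    for scores in scores_by_dataset:
        # Sort methods from worst to best on this dataset, so the
        # enumeration index is the Borda points earned (0 .. n-1).
        ranked = sorted(scores, key=scores.get)
        for points, method in enumerate(ranked):
            totals[method] += points
    # Final ranking: highest Borda total first.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical F-measure scores on three datasets (illustrative only).
scores = [
    {"SMOTE": 0.71, "MCT": 0.78, "CBSO": 0.74},
    {"SMOTE": 0.69, "MCT": 0.72, "CBSO": 0.75},
    {"SMOTE": 0.80, "MCT": 0.83, "CBSO": 0.79},
]
print(borda_count(scores))  # -> [('MCT', 5), ('CBSO', 3), ('SMOTE', 1)]
```

With these toy numbers, MCT ranks first even though it is not the top method on every dataset, which is exactly the consensus behaviour that makes the Borda Count useful for comparing many methods across many datasets.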