Repairing code smells detected in the code or design of a system is one of the activities that contributes to improving software quality. In this study, we investigate the impact of non-numerical software information, such as project status information, combined with machine learning techniques on improving code smell detection. For this purpose, we constructed a dataset of 22 systems with various project statuses, comprising 12,040 classes (1,935 of which are large classes) described by 18 features. A set of experiments was conducted with ten different machine learning techniques, dividing the dataset into training, validation, and testing sets, to detect the large class code smell. Feature selection and data balancing techniques were applied. Classifier performance was evaluated using six indicators: precision, recall, F-measure, MCC, ROC area, and the Kappa statistic. The preliminary experimental results reveal that feature selection and data balancing have little influence on the accuracy of the machine learning classifiers. Moreover, the classifiers behave differently when applied to sets whose classes have different values for the selected project status information. On average, classifier performance is better when status information is provided than when it is not. Random Forest achieved the best results on all performance indicators (100%) with status information, while AdaBoostM1 and SMO performed worst on most of them (>86%). According to the findings of this study, providing machine learning techniques with project status information about the classes to be analyzed can improve the results of large class detection.
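The evaluation protocol summarized above can be illustrated with a short sketch. This is a minimal example, not the authors' actual pipeline: it assumes a scikit-learn setting, uses a synthetic dataset in place of the real one, and uses SMOTE from imbalanced-learn as a stand-in for the unnamed data balancing technique; the feature selection method and split ratios are likewise assumptions.

```python
# Minimal sketch of the evaluation protocol described in the abstract.
# Assumptions (not from the paper): scikit-learn, SMOTE as the balancing
# technique, ANOVA-based feature selection, a 60/20/20 split, and a
# synthetic imbalanced dataset standing in for the real 22-system data.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             matthews_corrcoef, roc_auc_score,
                             cohen_kappa_score)
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 12,040 classes, 18 features, ~16% "large class" smells.
X, y = make_classification(n_samples=12040, n_features=18,
                           weights=[0.84, 0.16], random_state=42)

# Train / validation / test split (assumed 60/20/20, stratified).
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.4, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=42)

# Feature selection fitted on the training set only.
selector = SelectKBest(f_classif, k=10).fit(X_train, y_train)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)

# Data balancing applied to the training set only.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train_sel, y_train)

# Random Forest, the best-performing classifier reported in the study.
clf = RandomForestClassifier(random_state=42).fit(X_bal, y_bal)
y_pred = clf.predict(X_test_sel)
y_prob = clf.predict_proba(X_test_sel)[:, 1]

# The six indicators reported in the study.
print("Precision:", precision_score(y_test, y_pred))
print("Recall:   ", recall_score(y_test, y_pred))
print("F-measure:", f1_score(y_test, y_pred))
print("MCC:      ", matthews_corrcoef(y_test, y_pred))
print("ROC area: ", roc_auc_score(y_test, y_prob))
print("Kappa:    ", cohen_kappa_score(y_test, y_pred))
```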