Abstract

A code smell is a characteristic of source code that signals a flaw in code design and can lead to software quality problems. The severity of code smells must be measured because it helps developers prioritize refactoring efforts. Several recent studies have focused on predicting design-pattern errors using different detection tools, yet empirical studies on how to measure the severity of code smells, and on which learning model best detects that severity, are still lacking. To close this gap, this paper focuses on severity classification of code smells using several machine learning models, including regression models, multinomial models, and ordinal classification models. The Local Interpretable Model-Agnostic Explanations (LIME) algorithm is further used to explain the machine learning models' predictions and improve their interpretability. In addition, we extract the prediction rules generated by the Projective Adaptive Resonance Theory (PART) algorithm in order to study how effectively software metrics predict code smells. The experimental results show that the accuracy of the severity classification model improves over the baseline, and that the rank correlation between the predicted and actual severity reaches 0.92–0.97 as measured by Spearman's correlation.
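The agreement between predicted and actual severity rankings reported above is measured with Spearman's rank correlation. As a minimal, self-contained illustration (the data below is synthetic, not from the paper), Spearman's ρ can be computed as the Pearson correlation of the ranks, with tied values receiving their average rank:

```python
def average_ranks(values):
    """Assign 1-based ranks, giving tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        # Find the run of equal values starting at position i.
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks


def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)


# Hypothetical severity scores: actual vs. model-predicted (illustrative only).
actual = [1, 2, 2, 3, 4]
predicted = [1, 2, 3, 3, 4]
rho = spearman(actual, predicted)  # close to 1.0 for well-ranked predictions
```

A high ρ (such as the 0.92–0.97 range reported) indicates that the model orders smells by severity almost identically to the ground truth, even if the absolute predicted scores differ.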
