Abstract

To improve software maintainability, candidate improvement efforts must be made measurable. One such effort is refactoring, which makes code easier to read, understand, and maintain; it is guided by identifying bad smell areas in the code. This paper presents the results of an empirical study that develops a metrics model for identifying smelly classes. The model is further validated by its ability to identify smelly and error-prone classes, and the role of two new metrics (encapsulation and information hiding) in identifying smelly and faulty classes is also investigated. The paper first presents a binary statistical analysis of the relationship between metrics and bad smells, the results of which show a significant relationship. A metrics model for bad smell categorization (divided into five categories), built from the significant metrics shortlisted in the binary analysis, is then developed. The model is trained on three releases of the open source Mozilla Firefox system and validated on one version of Mozilla SeaMonkey, which has strong industrial usage. The results show that the metrics can predict smelly and faulty classes with high accuracy, but the categorized model cannot adequately identify all categories of bad smells, and only a few of the categorized models can predict faulty classes. Based on these results, we recommend further training of the model.
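To make the kind of analysis described above concrete, the sketch below shows one way a binary statistical relationship between class-level metrics and a smelly/non-smelly label could be set up using logistic regression. This is an illustrative assumption, not the paper's actual pipeline: the metric names (coupling, cohesion, size, encapsulation ratio) and the data are hypothetical placeholders.

```python
# Illustrative sketch only: a binary logistic-regression analysis relating
# class-level metrics to a smelly / non-smelly label. The metrics and data
# below are hypothetical placeholders, not the study's dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 200

# Hypothetical class-level metrics for n classes
X = np.column_stack([
    rng.poisson(8, n),       # e.g. coupling between classes
    rng.uniform(0, 1, n),    # e.g. lack-of-cohesion measure (normalised)
    rng.poisson(150, n),     # e.g. lines of code
    rng.uniform(0, 1, n),    # e.g. encapsulation / information-hiding ratio
])

# Synthetic label: higher coupling and size make a class more likely smelly
logit = 0.15 * X[:, 0] + 0.01 * X[:, 2] - 2.5
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Coefficients suggest which metrics are associated with smelly classes;
# the held-out report gauges how well the metrics predict smells.
print("coefficients:", model.coef_)
print(classification_report(y_test, model.predict(X_test)))
```

In such a setup, metrics whose coefficients are statistically significant would be the ones shortlisted for the categorized model described in the abstract.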
