Abstract

To customize their behavior at runtime, a wide range of modern frameworks use code annotations on application classes as metadata configuration. Despite its popularity, this type of metadata definition introduces complexity and semantic coupling that traditional software metrics ignore. This paper presents bad smells identified in annotated code and defines new metrics that aid their detection by enabling a quantitative assessment of complexity and coupling in this type of code. It also proposes strategies to detect those bad smells using the defined metrics and introduces an open-source tool created to automate bad smell discovery in annotated code.

Highlights

  • Attribute-oriented programming is a program-level marking technique used to mark program elements, such as classes, methods, and attributes, to indicate that they maintain application-specific or domain-specific semantics

  • Since the authors developed a tool that automates the calculation of these metrics, enabling their evaluation across different projects, future work includes a statistical analysis to refine and validate the threshold values used here

  • In this paper the authors presented bad smells identified in annotated code, proposed metrics that help detect them, created detection strategies based on those metrics, detailed refactoring mechanisms to eliminate the smells, and introduced an open-source tool developed to automate bad smell discovery in annotated code


Summary

INTRODUCTION

Attribute-oriented programming is a program-level marking technique used to mark program elements, such as classes, methods, and attributes, to indicate that they maintain application-specific or domain-specific semantics. It is a type of metadata definition placed in the source code of an application's classes to be consumed by a metadata-based framework [1] at runtime, or statically by an annotation processor. Code readability is closely related to maintainability and is a critical factor in overall software quality [3]. Despite the serious consequences of misusing metadata in applications, current complexity and coupling metrics do not consider metadata definitions in any measurement, leaving open the question of whether traditional metrics can evaluate complexity and coupling in the usage of metadata-based frameworks [2]. This paper proposes strategies to detect those bad smells using the defined metrics and introduces an open-source tool created to automate bad smell discovery in annotated code.
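The pattern described above can be sketched in Java, where annotations are the standard mechanism for attribute-oriented programming. The following minimal example is a hypothetical illustration (the `@MaxLength` annotation and the `User` class are not from the paper): it defines an annotation that attaches validation metadata to a field, and a tiny "framework" that consumes that metadata via reflection at runtime, exactly the kind of runtime consumption by a metadata-based framework the introduction refers to.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Hypothetical annotation: marks a String field with a maximum allowed length.
// RUNTIME retention keeps the metadata available to reflection.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface MaxLength {
    int value();
}

// Application class annotated with configuration metadata.
class User {
    @MaxLength(20)
    String name = "Alice";
}

public class AnnotationDemo {
    // A minimal stand-in for a metadata-based framework: it reads the
    // @MaxLength metadata from each field and enforces the constraint.
    static boolean validate(Object obj) {
        try {
            for (Field f : obj.getClass().getDeclaredFields()) {
                MaxLength m = f.getAnnotation(MaxLength.class);
                if (m != null) {
                    f.setAccessible(true);
                    String v = (String) f.get(obj);
                    if (v != null && v.length() > m.value()) {
                        return false;
                    }
                }
            }
            return true;
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(validate(new User())); // "Alice" fits within 20 chars
    }
}
```

Note the semantic coupling the paper warns about: the `User` class compiles independently of any framework, yet its behavior at runtime silently depends on some consumer interpreting `@MaxLength` correctly, and no traditional complexity or coupling metric counts that dependency.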

USING METRICS TO DETECT BAD SMELLS
Thresholds
Detection Strategy
ANNOTATIONS’ BAD SMELLS
ANNOTATIONS’ METRICS
DETECTING ANNOTATION BAD SMELLS
ANNOTATION REFACTORINGS
ANNOTATION SNIFFER
CONCLUSION