Abstract

Several strategies support test quality measurement and analysis. For example, code coverage, a widely used one, verifies whether the test cases exercise as many source code branches as possible. Other affordable strategies for evaluating test code quality exist, such as test smells analysis. Test smells are poor design choices in the implementation of test code, and their occurrence might reduce test suite quality. Practical, large-scale test smells identification depends on automated tool support; otherwise, test smells analysis could become a cost-ineffective strategy. In an earlier study, we proposed the JNose Test, an automated tool to detect test smells and analyze test suite quality from the test smells perspective. This study extends the previous one in two directions: i) we implemented the JNose-Core, an API encompassing the test smells detection rules; through an extensible architecture, the tool can now accommodate new detection rules or programming languages; and ii) we performed an empirical study to evaluate the effectiveness of the JNose Test and compare it against the state-of-the-art tool, tsDetect. Results showed that the JNose-Core precision score ranges from 91% to 100%, and the recall score from 89% to 100%. The JNose Test also presented a slight improvement over tsDetect in test smells detection at the class level.

Highlights

  • Ensuring end-user satisfaction, detecting software defects before go-live, and increasing software or product quality are among the most commonly reported software testing objectives, as reported in the annual report of a global consulting firm (Capgemini, 2018)

  • It provides a flexible architecture to support the insertion of new test smells detection rules

  • The results obtained with tsDetect diverge from those reported by Peruma et al. (2020)



Introduction

Ensuring end-user satisfaction, detecting software defects before go-live, and increasing software or product quality are among the most commonly reported software testing objectives, as reported in the annual report of a global consulting firm (Capgemini, 2018). Published reports estimate that poor software quality cost the United States economy over $2 trillion in 2020, referencing publicly available source material for that year (CISQ, 2021). Such data illustrate the need to employ software testing techniques in software development processes, as they can anticipate bug identification and fixing, reducing the likely effects of defects during implementation (or even when existing functionalities are under evolution) (Palomba et al., 2018; Spadini et al., 2018; Grano et al., 2019). In real-world practice, developers are likely to use anti-patterns during test development (Bavota et al., 2012; Junior et al., 2020). Those anti-patterns may negatively impact test code quality and maintenance and reduce the test suite's capability for detecting software faults (Bell et al., 2018; Spadini et al., 2020).
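To make the notion of a test-code anti-pattern concrete, the sketch below illustrates one smell commonly targeted by detection tools such as JNose Test and tsDetect: Assertion Roulette, a test method containing several assertions with no explanatory messages, so a failure does not indicate which check broke. The `ShoppingCart` class, the test method, and the `assertEquals` stand-in (used in place of JUnit's, to keep the example self-contained) are all hypothetical, not taken from the paper.

```java
import java.util.HashMap;
import java.util.Map;

public class AssertionRouletteExample {
    // Minimal stand-in for org.junit.Assert.assertEquals, so the sketch
    // compiles and runs without a JUnit dependency.
    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }

    // Hypothetical production class under test.
    static class ShoppingCart {
        private final Map<String, Integer> items = new HashMap<>();
        void add(String name, int quantity) { items.merge(name, quantity, Integer::sum); }
        int itemCount() { return items.values().stream().mapToInt(Integer::intValue).sum(); }
        int lineCount() { return items.size(); }
        boolean isEmpty() { return items.isEmpty(); }
    }

    // Smelly test (Assertion Roulette): three unexplained assertions in one
    // method. If this test fails, the report alone does not say whether the
    // item count, the line count, or the emptiness check was wrong.
    static void testShoppingCartSmelly() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("book", 2);
        assertEquals(2, cart.itemCount());
        assertEquals(1, cart.lineCount());
        assertEquals(false, cart.isEmpty());
    }

    public static void main(String[] args) {
        testShoppingCartSmelly();
        System.out.println("smelly test passed (but a failure would be ambiguous)");
    }
}
```

The usual remedy is either one assertion per test method or a descriptive message on each assertion, which is precisely the kind of refactoring that smell-detection output is meant to prompt.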
