Abstract

The results of empirical studies in software engineering are limited to the particular contexts in which they are conducted, are difficult to generalise, and the studies themselves are expensive to perform. Despite these problems, empirical studies can be made effective, and they are important to both researchers and practitioners. The key to their effectiveness lies in maximising the information that can be gained: examining and replicating existing studies, and using power analyses to determine an accurate minimum sample size. This approach was applied in a controlled experiment examining the combination of automated static analysis tools and code inspection in the context of the verification and validation (V&V) of concurrent Java components. The paper presents the results of this controlled experiment and shows that the combination of automated static analysis and code inspection is cost-effective. Throughout the experiment, a strategy to maximise the information gained was used. As a result, despite the limited size of the study, conclusive results were obtained, contributing to the research on V&V technology evaluation.
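To make the setting concrete, the following is a minimal, hypothetical sketch (not taken from the study's materials) of the kind of concurrency defect in a Java component that automated static analysis tools commonly report and that a follow-up code inspection would then confirm: an unsynchronised read-modify-write on shared state.

```java
// Illustrative only: a shared counter with inconsistent synchronisation.
// Static analysers typically flag that get() is synchronized while
// increment() is not, pointing inspectors at a possible data race.
public class SharedCounter {
    private int value = 0;

    // Defect: this read-modify-write is not atomic, so two threads can
    // interleave between the read and the write and increments are lost.
    public void increment() {
        value = value + 1;
    }

    public synchronized int get() {
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        SharedCounter counter = new SharedCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000; typically prints less because updates are lost.
        System.out.println(counter.get());
    }
}
```

A reviewer confirming such a report might suggest synchronising both accessors or replacing the field with java.util.concurrent.atomic.AtomicInteger.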
