Abstract

Recently we have witnessed a welcome increase in the amount of empirical evaluation of Software Engineering methods and concepts. It is hoped that this increase will lead to establishing Software Engineering as a well-defined subject with a sound, scientifically established underpinning rather than a topic based upon unsubstantiated theories and personal belief. For this to happen, the empirical work must be of the highest standard. Unfortunately, producing meaningful empirical evaluations is a highly hazardous activity, full of uncertainties and often unseen difficulties. Any researcher can overlook or neglect a seemingly innocuous factor that in fact invalidates all of the work. More seriously, large sections of the community can overlook essential experimental design guidelines, bringing into question the validity of much of the work undertaken to date. In this paper, the authors address one such factor: Statistical Power Analysis. It is believed, and will be demonstrated, that any body of research undertaken without considering statistical power as a fundamental design parameter is potentially fatally flawed. Unfortunately, the authors are aware of little Software Engineering research that takes this parameter into account. In addition to introducing Statistical Power, the paper demonstrates the potential difficulties of applying it to the design of Software Engineering experiments and concludes with a discussion of what the authors believe is the most viable method of incorporating the evaluation of statistical power within the experimental design process.
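For readers unfamiliar with the calculation the abstract alludes to, the sketch below illustrates what treating statistical power as a design parameter involves for a hypothetical two-group Software Engineering experiment. It is not taken from the paper; the chosen values (a medium effect size of 0.5, a significance level of 0.05, a target power of 0.8, and a group size of 15) and the use of the statsmodels library are purely illustrative assumptions.

```python
# Illustrative power analysis for a hypothetical two-group experiment
# (e.g. comparing two inspection techniques). Hypothetical parameters;
# not the authors' data or method.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

effect_size = 0.5    # assumed medium standardised difference (Cohen's d)
alpha = 0.05         # assumed significance level
target_power = 0.8   # conventional minimum power

# Sample size per group required to reach the target power.
n_per_group = analysis.solve_power(effect_size=effect_size,
                                   alpha=alpha,
                                   power=target_power,
                                   alternative='two-sided')

# Power actually achieved with an assumed small study of 15 subjects per group.
achieved_power = analysis.power(effect_size=effect_size,
                                nobs1=15,
                                alpha=alpha,
                                alternative='two-sided')

print(f"Subjects per group for 80% power: {n_per_group:.1f}")
print(f"Power with only 15 subjects per group: {achieved_power:.2f}")
```

Running a calculation of this kind before data collection shows whether a planned study has a realistic chance of detecting the effect of interest, which is the design consideration the paper argues is routinely neglected.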
