Abstract

Competitive pressures are forcing many companies to aggressively pursue software quality improvement based on software complexity metrics. A metrics database is often the key to a successful ongoing software metrics program. Contel had a successful metrics program that involved project-level metrics databases and planned a corporate-level database. The U.S. Army has established a minimum set of metrics for Army software development and maintenance covering the development process, software quality, and software complexity. This program involves a central Army-wide metrics database and a validation program. In light of the importance of corporate metrics databases and the prevalence of multicollinear metrics, we define the contribution of any proposed metric in terms of the variation it measures, irrespective of the metric's usefulness in quality models. This is of interest when full validation is not practical. We review two approaches to assessing the contribution of a new software complexity metric to a metrics database and present a new method based on information theory. The method is general and does not presume any particular set of metrics. We illustrate this method with three case studies, using data from full-scale operational software systems. The new method is less subjective than competing assessment methods.
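The information-theoretic idea of a metric "contributing" only the variation it adds beyond an existing metric set can be sketched as an incremental joint-entropy computation. This is a minimal illustration, not the paper's actual method: the function names, the discretized example values, and the use of plain Shannon entropy over bucketed metrics are all assumptions for the sketch.

```python
# Hedged sketch: entropy-based "contribution" of a candidate metric.
# Assumption: metric values have been discretized into buckets; the
# incremental information is H(existing, candidate) - H(existing).
from collections import Counter
from math import log2

def entropy(rows):
    """Shannon entropy in bits of a list of hashable observations."""
    n = len(rows)
    return -sum((c / n) * log2(c / n) for c in Counter(rows).values())

def added_information(existing, candidate):
    """Bits of variation the candidate metric adds beyond the existing set."""
    joint = [e + (c,) for e, c in zip(existing, candidate)]
    return entropy(joint) - entropy(existing)

# Illustrative data: existing metrics (size bucket, complexity bucket) per module.
existing = [(1, 1), (1, 1), (2, 2), (2, 2)]
redundant = [1, 1, 2, 2]   # perfectly predictable from the existing metrics
novel     = [1, 2, 1, 2]   # varies independently of the existing metrics
print(added_information(existing, redundant))  # 0.0 bits: multicollinear, no new variation
print(added_information(existing, novel))      # 1.0 bit: a genuine contribution
```

A multicollinear candidate adds zero bits, so it can be flagged as redundant without building or validating a quality model, which matches the abstract's point that this assessment is useful when full validation is impractical.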
