Abstract

An understanding of the distribution of software failure rates, and of its origin, will strengthen the relation of software reliability engineering both to other aspects of software engineering and to the wider field of reliability engineering. This paper proposes that the distribution of failure rates for faults in software systems tends to be lognormal. Many successful analytical models of software behavior share assumptions which suggest that the distribution of software event rates asymptotically approaches the lognormal. The lognormal distribution has its origin in the complexity of software systems, that is, the depth of their conditionals, and in the fact that event rates are determined by an essentially multiplicative process. The central limit theorem links these properties to the lognormal: just as the normal distribution arises when many random terms are summed, the lognormal distribution arises when the value of a variable is determined by the product of many random factors. Because the distribution of event rates tends to be lognormal and faults are simply a random sample of the events, the distribution of fault failure rates also tends to be lognormal. Failure rate distributions observed by other researchers in twelve repetitive-run experiments and nine sets of field failure data are analyzed and shown to support the lognormal hypothesis. The lognormal fits these empirical failure rate distributions better than the gamma distribution (the basis of the Gamma/EOS family of reliability growth models) or a power-law model.
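
To make the central limit theorem argument concrete, the following minimal simulation sketch (not from the paper; the conditional depth and the factor distribution are assumed purely for illustration) models each event rate as the product of independent branch-probability factors along a path through nested conditionals. The logs of such rates are sums of many random terms and so tend toward normality, making the rates themselves approximately lognormal.

    # Illustrative sketch only: event rates modeled as products of random
    # branch probabilities (depth and distribution are assumptions).
    import numpy as np

    rng = np.random.default_rng(0)

    DEPTH = 30         # hypothetical depth of nested conditionals
    N_EVENTS = 10_000  # number of simulated events

    # Each factor is a random branch probability; an event's rate is the
    # product of the factors along its path through the conditionals.
    factors = rng.uniform(0.001, 1.0, size=(N_EVENTS, DEPTH))
    rates = factors.prod(axis=1)

    def skewness(x):
        """Sample skewness: near zero for a symmetric (e.g. normal) sample."""
        return np.mean((x - x.mean()) ** 3) / x.std() ** 3

    # The raw rates are extremely right-skewed, while their logs are close
    # to symmetric, as the multiplicative central limit argument predicts.
    print(f"skewness of rates:     {skewness(rates):9.2f}")
    print(f"skewness of log rates: {skewness(np.log(rates)):9.2f}")

Under these assumptions the raw rates show very large positive skewness while the log rates are nearly symmetric, which is the qualitative signature of a lognormal distribution.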
