Abstract

There are a number of views as to what software reliability is and how it should be quantified. Some hold that the measure should be binary: an imperfect program would have zero reliability, while a perfect one would have a reliability of one. This view parallels that of program proving, whereby a program is either correct or incorrect. Others, however, feel that software reliability should be defined as the relative frequency of the times that the program works as intended by the user. This view is similar to that taken in testing, where the percentage of successful runs is used as a measure of program quality. According to the latter viewpoint, software reliability is a probabilistic measure and can be defined as follows. Let F be a class of faults, defined arbitrarily, and let T be a measure of relevant time, the units of which are dictated by the application at hand. Then the reliability of the software package, with respect to the class of faults F and the metric T, is the probability that no fault of the class occurs during execution of the program for a prespecified period of relevant time.

A number of models have been proposed during the past fifteen years to estimate software reliability and several other performance measures. These are based mainly on the failure history of the software and can be classified according to the nature of the failure process studied, as indicated below.

Times Between Failures Models: In this class of models the process under study is the time between failures. The most common approach is to assume that the time between, say, the (i-1)st and ith failures follows a distribution whose parameters depend on the number of faults remaining in the program during this interval.

Failure Count Models: The interest of this class of models is in the number of faults or failures in specified time intervals rather than in times between failures.
The failure counts are assumed to follow a known stochastic process with a time-dependent discrete or continuous failure rate.

Fault Seeding Models: The basic approach in this class of models is to "seed" a known number of faults into a program which is assumed to have an unknown number of indigenous faults.

Input Domain Based Models: The basic approach taken here is to generate a set of test cases from an input distribution which is assumed to be representative of the operational usage of the program. Because of the difficulty of obtaining this distribution, the input domain is partitioned into a set of equivalence classes, each of which is usually associated with a program path.

In this tutorial we discuss the key models from the above classes and the related issues of parameter estimation, unification of models, Bayesian interpretation, validation and comparison of models, and determination of optimum release time.
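The abstract names these model classes without giving formulas. As an illustrative sketch only (none of this is taken from the paper itself), the snippet below implements two textbook formulas commonly associated with two of the classes: a Jelinski-Moranda-style constant-per-fault hazard for the times-between-failures class, and a Lincoln-Petersen-style capture-recapture estimator for the fault seeding class. All function and parameter names here are assumptions chosen for clarity.

```python
import math


def jm_expected_tbf(total_faults: int, fixed: int, phi: float) -> float:
    """Times-between-failures sketch (Jelinski-Moranda style).

    Assumes each remaining fault contributes a constant hazard phi, so
    after `fixed` repairs the failure rate is phi * (total_faults - fixed)
    and the expected time to the next failure is its reciprocal.
    """
    remaining = total_faults - fixed
    if remaining <= 0:
        raise ValueError("no faults remain; expected time is unbounded")
    return 1.0 / (phi * remaining)


def jm_reliability(total_faults: int, fixed: int, phi: float, t: float) -> float:
    """P(no failure during the next t time units) under the same model:
    exp(-lambda * t) with lambda = phi * (total_faults - fixed)."""
    return math.exp(-phi * (total_faults - fixed) * t)


def seeded_fault_estimate(seeded: int, seeded_found: int, indigenous_found: int) -> float:
    """Fault seeding sketch (Lincoln-Petersen style).

    If testing uncovers seeded_found of the seeded faults and
    indigenous_found others, the total number of indigenous faults is
    estimated as indigenous_found * seeded / seeded_found, on the
    assumption that seeded and indigenous faults are equally detectable.
    """
    if seeded_found == 0:
        raise ValueError("no seeded faults found; estimate is undefined")
    return indigenous_found * seeded / seeded_found
```

For example, if 100 faults are seeded and testing finds 50 of them plus 20 indigenous faults, the estimator puts the indigenous total at 40, suggesting 20 indigenous faults remain undiscovered.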
