Abstract

Many software reliability growth models (SRGMs) have been developed over the past three decades to estimate software reliability measures such as the number of remaining faults and the software reliability. A common underlying assumption of many existing models is that the operating environment and the development environment are the same. This is often not the case in practice, because operating environments in the field are usually unknown and uncertain. In this paper, we develop a new software reliability model that incorporates the uncertainty of the system fault-detection rate per unit of time subject to the operating environment. Examples are included to illustrate the goodness of fit of the proposed model and of several existing non-homogeneous Poisson process (NHPP) models on a set of failure data collected from software applications. Three goodness-of-fit criteria, namely mean square error, predictive power, and predictive-ratio risk, are used to illustrate the model comparisons. The results show that the proposed model fits significantly better than the other existing NHPP models with respect to mean square error. Different criteria, however, weigh software reliability differently, and no software reliability model is optimal for all contributing criteria. In this paper, we therefore also discuss a new method, called normalized criteria distance, for ranking and selecting the best model from among SRGMs based on a set of criteria taken all together. Example results show that the proposed method offers a promising technique for selecting the best model based on a set of contributing criteria.
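The abstract does not give the exact definitions of the comparison criteria or of normalized criteria distance, so the sketch below is only illustrative: it uses the forms that mean square error, predictive-ratio risk, and predictive power commonly take in the SRGM literature, plus one plausible reading of a normalized-criteria-distance ranking (scale each criterion across models, then take the per-model Euclidean norm). The model names, parameter count `k`, and all numbers are hypothetical.

```python
import numpy as np

# Observed cumulative failure counts y_i at times t_1..t_n, and the
# mean-value function m(t_i) of each fitted NHPP model (all illustrative).
y = np.array([5.0, 9.0, 14.0, 20.0, 27.0, 33.0, 38.0, 41.0])
fitted = {
    "model_A": np.array([4.8, 9.5, 14.6, 19.4, 26.1, 32.7, 38.9, 41.8]),
    "model_B": np.array([6.0, 10.2, 13.1, 21.5, 28.3, 31.9, 37.0, 43.2]),
}

def mse(m, y, k):
    """Mean square error with k estimated parameters (a common SRGM form)."""
    return ((m - y) ** 2).sum() / (len(y) - k)

def prr(m, y):
    """Predictive-ratio risk: squared deviations weighted by the estimate."""
    return (((m - y) / m) ** 2).sum()

def pp(m, y):
    """Predictive power: squared deviations weighted by the observed data."""
    return (((m - y) / y) ** 2).sum()

# Criterion matrix: one row per model, one column per criterion
# (lower is better for all three).
names = list(fitted)
C = np.array([[mse(fitted[n], y, k=3), prr(fitted[n], y), pp(fitted[n], y)]
              for n in names])

# Normalized criteria distance (assumed form): normalize each criterion
# column across models so scales are comparable, then take the Euclidean
# norm per model; the model with the smallest distance ranks best.
ncd = np.sqrt(((C / C.sum(axis=0)) ** 2).sum(axis=1))
print(dict(zip(names, ncd)), "best:", names[int(ncd.argmin())])
```

The point of the normalization step is that raw criteria live on very different scales (an MSE of 12 versus a PRR of 0.3), so summing or comparing them directly would let one criterion dominate; dividing each column by its total puts every criterion on a common footing before the per-model distances are combined.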
