Abstract
Due to the increasing reliance on technology across nearly every industry over the past three decades, it has become necessary to evaluate the performance of a software product before its formal market release. The properties of a software application, such as its complexity and number of lines of code, change over time as a result of factors such as the testing environment, resource allocation, testing efficiency, and the testing team's expertise. Consequently, the assumption of a constant Fault Detection Rate (FDR) may fail to predict the number of remaining faults accurately. With these considerations in mind, a framework is developed that incorporates a change point into a testing effort-based Software Reliability Growth Model (SRGM) and accounts for the effect of application characteristics under both perfect and imperfect debugging settings. The outcomes are compared with those of the corresponding model without a change point. The proposed model is validated on two real-life software fault datasets, and the results demonstrate that it outperforms the model without a change point.
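To illustrate the change-point idea in its simplest form, the sketch below uses a plain exponential (Goel-Okumoto-style) mean value function with a piecewise-constant FDR: b1 before the change point tau and b2 after. This is a simplified assumption for illustration only; the paper's actual model is testing effort-based and covers imperfect debugging, and all parameter values here are hypothetical rather than fitted.

```python
import math

def mean_faults(t, a=100.0, b1=0.3, b2=0.1, tau=5.0):
    """Expected cumulative number of faults detected by time t.

    a   : total expected fault content (illustrative value)
    b1  : fault detection rate before the change point tau
    b2  : fault detection rate after the change point tau
    tau : change point in testing time

    Before tau the model behaves like a standard exponential SRGM;
    after tau the exponent accumulates at the new rate b2, which
    keeps the mean value function continuous at t = tau.
    """
    if t <= tau:
        return a * (1.0 - math.exp(-b1 * t))
    return a * (1.0 - math.exp(-b1 * tau - b2 * (t - tau)))

# The curve is continuous at the change point and saturates at a.
before = mean_faults(5.0)
after = mean_faults(5.0 + 1e-9)
print(abs(after - before) < 1e-6)  # continuity at tau
print(mean_faults(1000.0) <= 100.0)
```

In practice, the parameters (a, b1, b2, and sometimes tau itself) would be estimated from an observed fault dataset, for example by nonlinear least squares or maximum likelihood, and the fitted change-point model compared against the single-rate model as the paper does.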