Abstract

Reliability growth models are commonly categorized into two primary groups: parametric and non-parametric models. Parametric models, known as Software Reliability Growth Models (SRGMs), rely on a set of hypotheses that can affect the accuracy of their predictions, while non-parametric models (such as neural networks) learn the failure behaviour directly from training data, without any assumptions about the underlying model. In this paper, we propose several methods to enhance prediction accuracy in the software reliability context. More specifically, on the one hand we introduce two gradient-based techniques for estimating the parameters of classical SRGMs; on the other, we propose methods based on an LSTM Encoder–Decoder and on Bayesian approximation with Langevin-gradient and variational-inference neural networks. To evaluate the performance of our proposed models, we compare them with various neural-network-based software reliability models on three real-world software failure datasets, using the Mean Square Error (MSE) as the model comparison criterion. The experimental results indicate that our proposed non-parametric models outperform most classical parametric and non-parametric models.
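To make the gradient-based estimation of a classical SRGM concrete, the following is a minimal sketch, not the paper's actual method: it assumes the Goel–Okumoto model (one common SRGM; the abstract does not name which models are used), whose mean value function is μ(t) = a(1 − e^(−bt)), and fits (a, b) to cumulative failure counts by gradient descent on the MSE. The function names, the synthetic data, and the learning-rate settings are all illustrative choices.

```python
import math

def go_mean(a, b, t):
    """Goel-Okumoto mean value function: expected cumulative failures by time t."""
    return a * (1.0 - math.exp(-b * t))

def fit_goel_okumoto(ts, ys, b0=0.1, lr=1e-6, iters=5000):
    """Estimate (a, b) by gradient descent on b, with a solved in closed form
    at each step, minimising the MSE against observed cumulative failures."""
    b = b0
    for _ in range(iters):
        g = [1.0 - math.exp(-b * t) for t in ts]
        # The mean value function is linear in a, so the MSE-optimal a
        # for the current b has a closed-form least-squares solution.
        a = sum(y * gi for y, gi in zip(ys, g)) / sum(gi * gi for gi in g)
        # Gradient of the MSE with respect to b (the term through a
        # vanishes because a is already MSE-optimal for this b).
        grad_b = (2.0 / len(ts)) * sum(
            (a * gi - y) * a * t * math.exp(-b * t)
            for t, y, gi in zip(ts, ys, g)
        )
        b -= lr * grad_b
    return a, b

# Synthetic cumulative-failure data generated from a known model (a=100, b=0.05),
# standing in for a real software failure dataset.
ts = list(range(1, 31))
ys = [go_mean(100.0, 0.05, t) for t in ts]

a_hat, b_hat = fit_goel_okumoto(ts, ys)
mse = sum((go_mean(a_hat, b_hat, t) - y) ** 2
          for t, y in zip(ts, ys)) / len(ts)
```

On this noiseless synthetic series the estimates recover the generating parameters closely; with real failure data, the fitted MSE is the quantity the paper uses to compare parametric and non-parametric models.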

