Articles published on Imperfect debugging
161 Search results
- Research Article
- 10.1038/s41598-025-31258-w
- Dec 12, 2025
- Scientific reports
- Neelam Sharma + 5 more
Many software reliability growth models (SRGMs) have been proposed within the framework of probability theory to estimate software reliability, the remaining number of faults, and the optimal release time. The fault detection rate (FDR) may vary because of changes in testing strategies. Owing to a lack of knowledge of the software code, the testing team might be unable to rectify detected faults, thereby introducing new faults during the fault correction process. The debugging process is imperfect due to factors such as human error, insufficient testing, and complex code, resulting in epistemic uncertainty. In this paper, we propose a new software belief reliability growth model (SBRGM) using uncertain differential equations to deal with epistemic uncertainty effectively. We incorporate imperfect debugging and a change point based on the approach of belief reliability theory, making this model more accurate than some previously developed models. The parameter estimation methodology is derived using the least-squares method and implemented in Python 3.10. The change point is calculated through empirical data analysis based on the first principle of derivatives. Three real data sets are used to validate the proposed model. This research offers a more flexible and realistic treatment of epistemic uncertainty than conventional approaches.
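The least-squares estimation step mentioned above can be sketched on a classic Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)); the belief-reliability model itself is not reproduced here, and the fault data below are invented for illustration:

```python
# Least-squares fit of a Goel-Okumoto mean value function,
# m(t) = a * (1 - exp(-b*t)), to cumulative fault counts.
# Model choice and data are illustrative stand-ins, not the
# paper's belief-reliability model.
import math

t = list(range(1, 11))                       # testing weeks
faults = [12, 21, 28, 34, 38, 41, 43, 45, 46, 47]  # cumulative faults

def fit_for_b(b):
    # For a fixed detection rate b, the optimal fault content 'a'
    # has a closed form (the problem is linear in a).
    g = [1.0 - math.exp(-b * ti) for ti in t]
    a = sum(f * gi for f, gi in zip(faults, g)) / sum(gi * gi for gi in g)
    sse = sum((f - a * gi) ** 2 for f, gi in zip(faults, g))
    return sse, a

# One-dimensional grid search over b, profiling out a.
best_b = min((b / 1000.0 for b in range(1, 1001)),
             key=lambda b: fit_for_b(b)[0])
best_sse, best_a = fit_for_b(best_b)
print(round(best_a, 1), round(best_b, 3), round(best_sse, 2))
```

Profiling out the linear parameter reduces the fit to a one-dimensional search, which keeps the sketch dependency-free.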
- Research Article
- 10.1142/s0218194025500585
- Oct 13, 2025
- International Journal of Software Engineering and Knowledge Engineering
- Gwo-Liang Liao + 2 more
This study aims to develop two generalized software reliability growth models (SRGMs) based on the non-homogeneous Poisson process (NHPP), considering the potential for imperfect debugging. The proposed models integrate both exponential and linear characteristics into the fault content function while accounting for varying fault detection rates. Additionally, numerical examples are provided to validate the effectiveness of the proposed models and compare their performance with existing ones. The results indicate that the proposed models offer more accurate prediction capabilities.
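The linear fault-content case can be sketched numerically; the forward-Euler scheme, the parameters, and the specific fault content function a(t) = a0 + alpha*t are illustrative assumptions, not the paper's exact formulation:

```python
# Numeric sketch of an NHPP SRGM with imperfect debugging: new
# faults are introduced at rate alpha, so the fault content
# a(t) = a0 + alpha*t grows while detection proceeds at rate b.
# Forward-Euler integration of dm/dt = b * (a(t) - m(t)).
# All parameter values are illustrative.
a0, alpha, b = 100.0, 2.0, 0.15   # initial faults, introduction rate, FDR
dt, T = 0.01, 40.0                # Euler step, end of testing

m, t = 0.0, 0.0
while t < T:
    m += b * ((a0 + alpha * t) - m) * dt
    t += dt

remaining = (a0 + alpha * T) - m  # faults still latent at time T
print(round(m, 1), round(remaining, 1))
```

Because faults keep being introduced, the remaining-fault count settles near alpha/b instead of decaying to zero, which is the qualitative signature of imperfect debugging.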
- Research Article
- 10.3389/fams.2025.1669066
- Sep 3, 2025
- Frontiers in Applied Mathematics and Statistics
- Kaushal Kumar + 3 more
Software reliability analysis is vital for evaluating software quality, where reliability is the probability of failure-free operation of a system for a specified duration. Numerous SRGMs have been proposed, mainly based on the NHPP, to enhance the reliability of software products. A key aspect of software reliability modeling involves the fault detection process (FDP) and the fault correction process (FCP), both of which are vital for understanding and predicting software performance. These models have evolved to consider dependencies between fault detection and correction, time-delay effects, and testing-effort consumption, thereby refining predictions and providing robust reliability estimates. In this paper, we first provide a comprehensive review of the last four decades of research on software reliability modeling, focusing on methods proposed for predicting software reliability through the FDP and FCP. We then present FDP and FCP models for imperfect debugging that consider BTXTEF. Two specific paired FDP and FCP models are proposed with BTXTEF. The proposed SRGM with BTXTEF contains undetermined parameters, which we optimize with particle swarm optimization (PSO) on an actual dataset rather than using traditional estimation methods. We compare the performance of the proposed SRGM against other existing models from the literature. The results reveal that the proposed SRGM with BTXTEF for the FDP and FCP is highly effective and outperforms existing models.
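The PSO estimation step can be sketched as follows; a Goel-Okumoto objective stands in for the BTXTEF model, and all parameters, bounds, and data are illustrative:

```python
# Minimal particle swarm optimization (PSO) sketch for SRGM
# parameter estimation: particles search the (a, b) space of a
# Goel-Okumoto model to minimize the sum of squared errors.
# The BTXTEF model itself is not reproduced; this only
# illustrates PSO replacing classical estimation.
import math, random

random.seed(0)
t = list(range(1, 11))
faults = [12, 21, 28, 34, 38, 41, 43, 45, 46, 47]

def sse(a, b):
    return sum((f - a * (1 - math.exp(-b * ti))) ** 2
               for ti, f in zip(t, faults))

n, w, c1, c2 = 20, 0.7, 1.5, 1.5   # swarm size, inertia, pulls
pos = [[random.uniform(1, 200), random.uniform(0.01, 1)] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=lambda p: sse(*p))[:]

for _ in range(200):
    for i in range(n):
        for d in range(2):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        # Keep particles inside a sane box (a in [1, 1000], b in (0, 5]).
        pos[i][0] = min(max(pos[i][0], 1.0), 1000.0)
        pos[i][1] = min(max(pos[i][1], 1e-4), 5.0)
        if sse(*pos[i]) < sse(*pbest[i]):
            pbest[i] = pos[i][:]
            if sse(*pbest[i]) < sse(*gbest):
                gbest = pbest[i][:]

print(round(gbest[0], 1), round(gbest[1], 3), round(sse(*gbest), 2))
```

PSO needs only objective evaluations, which is why it is attractive when the paired FDP/FCP likelihood is awkward to differentiate.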
- Research Article
- 10.1002/smr.70037
- Jul 1, 2025
- Journal of Software: Evolution and Process
- Anup Kumar Behera + 1 more
In today's swiftly evolving technological landscape, software reliability has become crucial. To evaluate it, researchers have investigated numerous software reliability growth models (SRGMs). Software developers frequently test in a controlled environment where all factors are known; the operational environment, however, can introduce unpredictable and unfamiliar factors. Many studies in the literature have recognized uncertainty in the operational environment under different scenarios, such as perfect and imperfect debugging, various testing coverage functions, and different error detection rates. However, the inclusion of a testing effort function (TEF) alongside this uncertain operating environment has received notably less attention. This paper addresses that gap by exploring a software reliability growth model that integrates a power-law TEF to account for an uncertain operational environment. For validation, a numerical analysis is performed on two datasets (DS1 and DS2), and the proposed model is compared with seven existing reliability models using six goodness-of-fit criteria and an improved NCD ranking criterion. In addition, single- and multiple-parameter sensitivity analyses identify the critical parameters. The proposed models could assist system analysts in predicting various parameters of software systems, and the findings can support decision makers.
- Research Article
- 10.1007/s11219-025-09718-3
- Apr 4, 2025
- Software Quality Journal
- Nageswari N + 2 more
Predictive framework of software reliability analysis under multiple change points and imperfect debugging
- Research Article
- 10.1007/s13198-024-02671-7
- Feb 6, 2025
- International Journal of System Assurance Engineering and Management
- Shikha Dwivedi + 1 more
Fault prediction of multi-version software considering imperfect debugging and severity
- Research Article
- 10.1002/qre.3716
- Dec 29, 2024
- Quality and Reliability Engineering International
- Umashankar Samal
Software reliability is a critical metric for determining the readiness and quality of software products before release. In this study, a software reliability growth model (SRGM) is proposed that integrates imperfect debugging and fault removal efficiency to better reflect real-world testing environments. The model accounts for the possibility of introducing new faults during debugging and adjusts the fault removal process according to the testing team's efficiency and learning progression. Furthermore, an optimal software release time is derived by considering various costs, including testing, debugging, warranty, and reputation, subject to required reliability levels. The proposed model is validated on two widely adopted datasets, demonstrating consistently superior performance compared to several existing models. It offers a valuable tool for researchers in software reliability, providing a more accurate representation of debugging processes and facilitating more informed decisions about optimal software release timing.
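The cost/reliability trade-off behind optimal release timing can be sketched as follows; the cost coefficients, the Goel-Okumoto mean value function, and the reliability requirement are illustrative assumptions, not the paper's cost model:

```python
# Sketch of a release-time trade-off: total cost combines
# testing time, faults fixed before release, and faults
# escaping to the field, subject to a reliability requirement.
# All parameters below are illustrative assumptions.
import math

a, b = 150.0, 0.1                 # G-O parameters (assumed already fitted)
c_test, c_fix, c_field = 5.0, 1.0, 20.0  # cost per unit time / fault
R0, x = 0.95, 1.0                 # required R(x | T) over mission time x

def m(t):
    return a * (1 - math.exp(-b * t))

def cost(T):
    return c_test * T + c_fix * m(T) + c_field * (a - m(T))

def reliability(T):
    # Probability of no failure in (T, T + x] for an NHPP.
    return math.exp(-(m(T + x) - m(T)))

# Among candidate times meeting the reliability requirement,
# take the cheapest (simple grid search).
grid = [i / 10 for i in range(1, 1001)]
feasible = [T for T in grid if reliability(T) >= R0]
T_opt = min(feasible, key=cost)
print(T_opt, round(cost(T_opt), 1), round(reliability(T_opt), 4))
```

Because field failures are priced far above in-house fixes, the reliability constraint, not the raw cost curve, typically pins down the release time in this sketch.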
- Research Article
- 10.1142/s0218539324500177
- Jun 17, 2024
- International Journal of Reliability, Quality and Safety Engineering
- Rabia Nazir + 3 more
In this paper, we introduce an innovative Software Reliability Growth Model (SRGM) designed to tackle the pivotal challenges associated with software reliability in the contemporary digital landscape, where the prevalence of online systems is ubiquitous. This SRGM integrates Imperfect Debugging (ID), Testing Coverage (TC), Testing Effort (TE), and error generation into a cohesive framework. Employing a sigmoid function to encapsulate TE, it incorporates three distinct TC functions: Delayed S-shaped, Exponential, and Logistic. This model relies on foundational assumptions, including the proportionality of fault detection rates to remaining faults, the introduction of new faults during debugging, and the intricate connection between fault detection and code coverage. The Mean Value Function (MVF) is computed through these differential equations, and the resultant MVFs are systematically tabulated for all models. An examination of the sigmoid TE function and the Weibull TE function across diverse datasets, utilizing a range of goodness-of-fit criteria including Mean Square Error (MSE), Pham’s Criterion (PC), Predictive Risk Ratio (PRR), Bayesian Information Criterion (BIC), and Akaike’s Information Criterion (AIC), reveals the superior performance of the sigmoid TE function over the Weibull counterpart across various datasets and evaluation criteria. In conclusion, this paper introduces a groundbreaking SRGM that seamlessly integrates ID, TC, and TE, offering valuable insights for assessing software reliability in the dynamic landscape of modern digital systems.
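The goodness-of-fit criteria listed above can be computed directly from residuals; the predictions below are invented, and the Gaussian-error forms of AIC and BIC are a common simplification, not necessarily the exact formulas used in the paper:

```python
# Goodness-of-fit criteria for a fitted mean value function:
# MSE, Predictive Risk Ratio (PRR), AIC, and BIC. Observations
# and predictions are illustrative; k is the number of model
# parameters. AIC/BIC use the Gaussian-error simplification.
import math

obs  = [12, 21, 28, 34, 38, 41, 43, 45, 46, 47]
pred = [12.4, 21.7, 28.6, 33.8, 37.7, 40.6, 42.8, 44.4, 45.6, 46.5]
n, k = len(obs), 2

sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
mse = sse / n                                             # Mean Square Error
prr = sum(((p - o) / p) ** 2 for o, p in zip(obs, pred))  # Predictive Risk Ratio
aic = n * math.log(sse / n) + 2 * k                       # Akaike criterion
bic = n * math.log(sse / n) + k * math.log(n)             # Bayesian criterion

print(round(mse, 3), round(prr, 4), round(aic, 2), round(bic, 2))
```

AIC and BIC share the fit term and differ only in how hard they penalize the k parameters, which is why BIC exceeds AIC once n > e².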
- Research Article
- 10.1142/s0218539324500116
- May 17, 2024
- International Journal of Reliability, Quality and Safety Engineering
- Rajat Arora + 3 more
Due to the increasing reliance on technology in nearly every industry over the past three decades, it has become necessary to evaluate the performance of a software product prior to its formal market release. The properties of a software application, such as its complexity and lines of code, change over time as a result of factors such as the testing environment, resource allocation, testing efficiency, and the testing team's expertise. The assumption of a constant fault detection rate (FDR) therefore may not accurately predict the potential number of bugs. With these considerations in mind, a framework is developed that incorporates a change point into a testing-effort-based software reliability growth model (SRGM), taking into account the effect of application characteristics under both perfect and imperfect debugging settings. These outcomes are compared to the model without a change point. The proposed model is validated on two real-life software fault datasets, and the results demonstrate that it performs better than the model without a change point.
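A change-point mean value function of the kind described can be sketched with a fault detection rate that switches at the change point; the piecewise form and all parameter values are illustrative, not the paper's model:

```python
# Sketch of a change-point SRGM: the fault detection rate
# switches from b1 to b2 at change point tau, giving the
# continuous piecewise mean value function
#   m(t) = a*(1 - exp(-b1*t))                  for t <= tau
#   m(t) = a*(1 - exp(-b1*tau - b2*(t - tau))) for t >  tau.
# Parameter values are illustrative only.
import math

a, b1, b2, tau = 120.0, 0.08, 0.20, 12.0  # faults, rates, change point

def mvf(t):
    if t <= tau:
        return a * (1 - math.exp(-b1 * t))
    return a * (1 - math.exp(-b1 * tau - b2 * (t - tau)))

before, after = mvf(tau), mvf(tau + 10)
print(round(before, 1), round(after, 1))
```

Carrying the accumulated exposure b1*tau into the second branch is what keeps the curve continuous at the change point while letting its slope jump.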
- Research Article
- 10.1504/ijrs.2024.139202
- Jan 1, 2024
- International Journal of Reliability and Safety
- Asheesh Tiwari + 1 more
Growth model for detection and removal of faults having different severity with single change point and imperfect debugging
- Research Article
- 10.5267/j.ijiec.2023.11.001
- Jan 1, 2024
- International Journal of Industrial Engineering Computations
- Chun-Wu Yeh + 1 more
This research delves into the software testing process and its environmental factors to uncover the core elements influencing software reliability, focusing on the learning and negligence factors of the software reliability growth model. The learning factor accelerates reliability growth, leading to an S-shaped mean value function, while the negligence factor reflects the occurrence of imperfect debugging. The study also uses Brownian motion and stochastic differential equations to establish statistical confidence intervals for reliability and costs. These intervals help software managers assess potential release risks at various confidence levels, allowing informed decisions that account for resource constraints and desired system reliability across different scenarios.
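The Brownian-motion view can be sketched by Monte Carlo: simulate stochastic cumulative-fault paths and read confidence bounds from the empirical quantiles. The drift/diffusion form and every parameter below are illustrative assumptions, not the paper's equations:

```python
# Monte Carlo sketch of a stochastic SRGM: cumulative faults
# follow dm = b*(a - m) dt + sigma dW (Euler-Maruyama), and
# empirical quantiles of the simulated endpoints give a 95%
# band at a candidate release time. Parameters are illustrative.
import math, random

random.seed(1)
a, b, sigma = 100.0, 0.15, 2.0     # fault content, FDR, noise scale
dt, T, n_paths = 0.05, 30.0, 500
steps = int(T / dt)

finals = []
for _ in range(n_paths):
    m = 0.0
    for _ in range(steps):
        m += b * (a - m) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    finals.append(m)

finals.sort()
lo, hi = finals[int(0.025 * n_paths)], finals[int(0.975 * n_paths)]
print(round(lo, 1), round(hi, 1))
```

The band width is governed by sigma^2/(2b), so a faster detection rate narrows the release-risk interval even at the same noise level.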
- Research Article
- 10.1142/s021812662450110x
- Oct 16, 2023
- Journal of Circuits, Systems and Computers
- Asheesh Tiwari + 1 more
In the current technological era, reliability growth models that describe software failures are employed to estimate software reliability. Conventional software reliability growth models (SRGMs) usually presume that faults are corrected immediately once found and that no new faults are introduced. This assumption can be impractical: new faults may be introduced throughout debugging, which is called imperfect debugging. During such debugging and fault introduction, the consumption of testing effort also plays an important role. Hence, this paper proposes two models that incorporate testing effort along with imperfect debugging during fault correction. The combined testing effort is the sum of two efforts, namely detection effort and correction effort. The formulated models are validated on a real data set and contrasted with other well-known SRGMs. The results indicate that Model 2 performs best in terms of fitting and prediction capability.
- Research Article
- 10.1287/ijoc.2021.0141
- Aug 24, 2023
- INFORMS Journal on Computing
- Yeu-Shiang Huang + 3 more
Research on software reliability growth models (SRGMs) has been conducted extensively for decades, and the models are often developed under two assumptions: (1) once errors are detected, they can be removed completely and instantly, and (2) errors can be removed permanently, and the debugging tasks will not produce any new errors. However, both assumptions are unrealistic. This study proposes an SRGM that relaxes these restrictive assumptions by introducing a detection process in which an error may be removed only after a period of time once it has been detected, and by considering imperfect debugging, in which new errors may emerge from the corresponding debugging tasks. In addition, because software can be upgraded to respond on a timely basis to constantly changing consumer expectations and thus extend product life in the market, the proposed SRGM also considers software upgrades for multiversion software, and a dynamic programming approach is used to obtain the optimal release schedule under a budget constraint. Real data sets are used to examine the effectiveness of the proposed model, and the fitting results show that it outperforms other existing models. The numerical validation indicates that the proposed dynamic programming method with information updating outperforms the sequential solution method in determining the optimal release time for each version. Moreover, decision makers should evaluate the parameters carefully, because overestimating the parameters of the mean value functions will cause serious software risk due to excessively shortened testing time. History: Accepted by Pascal Van Hentenryck, Area Editor for Computational Modeling: Methods & Analysis.
Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information ( https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2021.0141 ) as well as from the IJOC GitHub software repository ( https://github.com/INFORMSJoC/2021.0141 ). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/ .
- Research Article
- 10.1016/j.mex.2023.102076
- Jan 1, 2023
- MethodsX
- Ritu Bibyan + 3 more
Multi-release software model based on testing coverage incorporating random effect (SDE)
- Research Article
- 10.1504/ijrs.2023.10061631
- Jan 1, 2023
- International Journal of Reliability and Safety
- Asheesh Tiwari + 1 more
Growth model for detection and removal of faults having different severity with single change point and imperfect debugging
- Research Article
- 10.1504/ijor.2023.10061339
- Jan 1, 2023
- International Journal of Operational Research
- Abhishek Tandon + 2 more
Impact of Slippage Cost and Risk Cost on Software Development under Imperfect Debugging Environment
- Research Article
- 10.3390/app122110736
- Oct 23, 2022
- Applied Sciences
- Ce Zhang + 5 more
From the perspective of the fault detection rate (FDR), an indispensable component in reliability modeling, this paper proposes two kinds of reliability models under imperfect debugging, forming a relatively flexible and unified software reliability growth model. First, the paper examines the incomplete nature of debugging and fault repair and establishes a unified imperfect-debugging framework model related to the FDR, called imperfect debugging type I. It then considers the introduction of new faults during debugging and establishes a unified imperfect-debugging framework model that supports multiple FDRs, called imperfect debugging type II. Finally, a series of specific reliability models is derived by integrating multiple specific FDRs into the two types of framework models. Analysis of the two kinds of imperfect debugging models on multiple public failure data sets, together with analysis of model performance differences in terms of fitting metrics and prediction, identifies a fault detection rate function that better describes the fault detection process. Incorporating this function into the two types of imperfect debugging models yields a more accurate model that not only outperforms other models but also describes the real testing process more accurately, guiding software testers in quantitatively improving software reliability.
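The FDR-driven framework idea can be sketched as a mean value function m(t) = a(1 - exp(-integral of d(s) from 0 to t)) driven by a pluggable detection-rate function d(t); the logistic (learning-curve) FDR below is a stand-in, not the function identified in the paper:

```python
# Sketch of an FDR-driven framework: any detection-rate
# function d(t) induces a mean value function
#   m(t) = a * (1 - exp(-integral_0^t d(s) ds)).
# Here d(t) is a logistic (learning-curve) FDR; for this d,
# m(t) reduces to the inflection S-shaped model
#   a*(1 - exp(-b*t)) / (1 + beta*exp(-b*t)).
# Parameters are illustrative assumptions.
import math

a = 100.0
b, beta = 0.4, 10.0          # logistic FDR parameters (assumed)

def d(t):
    return b / (1 + beta * math.exp(-b * t))

def mvf(t, steps=2000):
    # Trapezoidal integration of the cumulative FDR.
    h = t / steps
    integral = sum((d(i * h) + d((i + 1) * h)) / 2 * h for i in range(steps))
    return a * (1 - math.exp(-integral))

print(round(mvf(10), 1), round(mvf(30), 1))
```

Swapping d(t) for an exponential or constant rate reproduces other classic SRGMs from the same skeleton, which is the "unified framework" point.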
- Research Article
- 10.5815/ijitcs.2022.03.01
- Jun 8, 2022
- International Journal of Information Technology and Computer Science
- Islam S Ramadan + 3 more
Nowadays, computer software plays a significant role in all fields of our life. Open-source software, in particular, provides economic benefits for software companies because it allows building new software without creating it from scratch. It is therefore widely used, and accordingly the quality of open-source software is a critical issue and a top research direction in the literature. In software development cycles, checking software reliability is an important indicator of whether to release the software. Deterministic and probabilistic models are the two main categories used to assess software reliability. In this paper, we perform a comparative study of eight software reliability models: two deterministic models and six probabilistic models based on three methodologies: perfect debugging, imperfect debugging, and the Gompertz distribution. We evaluate the models on three versions of a standard open-source dataset from the GNU Network Object Model Environment (GNOME) projects, using four evaluation criteria: sum of squared errors, mean squared error, R-squared, and reliability. The experimental results show that for the first version of the dataset, SRGM-4, based on the imperfect debugging methodology, achieved the best reliability result, while for the last two versions, SRGM-6, based on the Gompertz distribution methodology, achieved the best results in terms of sum of squared errors, mean squared error, and R-squared.
- Research Article
- 10.1109/tr.2022.3158336
- Jun 1, 2022
- IEEE Transactions on Reliability
- Zhe Liu + 1 more
Due to the increased dependency of modern systems on software, software reliability has become a primary concern during software development. To track and measure software reliability, various software reliability growth models have been proposed within the framework of probability theory. However, software failures involve considerable epistemic uncertainty, which probability theory cannot depict well, and debugging processes are usually imperfect due to the complexity of, and incomplete understanding of, software systems. This article derives an imperfect-debugging software belief reliability growth model using an uncertain differential equation within the framework of uncertainty theory, and investigates essential software belief reliability metrics, namely belief reliability, belief reliable time, and mean time between failures, based on belief reliability theory. Estimators for the unknown parameters of the model are derived. Real data analyses validate the model and show that it performs better than previous models in terms of the sum of squared errors, and a theoretical analysis of these results is presented.
- Research Article
- 10.3390/math10101689
- May 15, 2022
- Mathematics
- Qing Tian + 2 more
In this study, an imperfect-debugging software reliability growth model (SRGM) with Bayesian analysis is proposed to determine an optimal software release time that minimizes software testing costs while enhancing practicability. It is generally difficult to estimate model parameters by maximum likelihood estimation (MLE) or least squares estimation (LSE) when historical data are insufficient. In that situation, the proposed Bayesian method can adopt domain experts' prior judgments and use a small amount of software testing data to forecast reliability and cost through prior and posterior analysis. Moreover, debugging efficiency involves the testing staff's learning and negligence, so human factors and the nature of the debugging process are taken into consideration in the fundamental model. On this basis, the model's parameters are more intuitive to estimate and can easily be evaluated by domain experts, which is the major advantage for extending related applications in practice. Finally, numerical examples and sensitivity analyses provide managerial insights and useful directions for software release strategies.
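The Bayesian updating step can be sketched with a grid posterior over the detection rate; the Gamma-shaped prior, the Goel-Okumoto mean value function, the known fault content, and the data are all illustrative assumptions, not the paper's model:

```python
# Sketch of Bayesian SRGM estimation with scarce data: an
# expert prior on the detection rate b (Gamma(2, 5) shape) is
# updated with three early cumulative fault counts, assuming
# NHPP (independent Poisson increments) with a Goel-Okumoto
# mean value function and known fault content a. Illustrative.
import math

a = 50.0                          # assumed known fault content
t = [1, 2, 3]                     # few early observation times
cum = [12, 21, 28]                # cumulative faults observed

def mvf(b, ti):
    return a * (1 - math.exp(-b * ti))

def log_lik(b):
    # Independent-increment Poisson likelihood of the NHPP.
    ll, prev_m, prev_n = 0.0, 0.0, 0
    for ti, ni in zip(t, cum):
        lam = mvf(b, ti) - prev_m  # expected new faults in interval
        k = ni - prev_n            # observed new faults in interval
        ll += k * math.log(lam) - lam - math.lgamma(k + 1)
        prev_m, prev_n = mvf(b, ti), ni
    return ll

# Grid posterior: prior weight proportional to b * exp(-5*b).
grid = [i / 1000 for i in range(1, 1500)]
post = [math.exp(log_lik(b)) * b * math.exp(-5 * b) for b in grid]
z = sum(post)
mean_b = sum(b * p for b, p in zip(grid, post)) / z
print(round(mean_b, 3))
```

With only three counts, the posterior mean sits between the prior mean (0.4) and the data's best fit, which is exactly the small-sample behavior the abstract motivates.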