Software development is a highly unpredictable process, and ensuring software quality and reliability before release to the market is crucial. A common practice during development is code reuse, achieved through libraries, frameworks, and other reusable components. In practice, when a fault is detected in replicated code, developers must check the other copies for similar faults, since such faults are dependent. To prevent recurrence of observed failures, developers must remove the corresponding leading fault along with any related dependent faults. Many software reliability growth models (SRGMs) have been proposed and studied, but most assume that each failure is caused by a single detected fault. In reality, multiple faults that share similarities or dependencies may be detected. In addition, some SRGMs rely on assumptions that do not always hold, such as perfect debugging and/or immediate debugging. In this study, modified diffusion models are proposed to relax these unrealistic assumptions and better capture the dynamics of open source software (OSS) development. Experiments on real OSS data show that the proposed models accurately describe the fault correction process of OSS. Finally, an optimal software release policy is proposed and studied. The policy accounts for the number of faults remaining in the software, the cost of identifying and correcting those faults, and the level of market demand. By weighing these factors, developers can determine the optimal time to release the software to the market.
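To illustrate the general idea of a cost-based release policy of the kind the abstract describes, the sketch below uses a classical exponential (Goel-Okumoto) mean value function and a standard testing-versus-post-release cost trade-off. This is a minimal illustration under assumed parameter values, not the proposed diffusion models or the cost structure from the paper; all parameters (a, b, c1, c2, c3) are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical parameters (illustrative assumptions, not estimates from the paper's OSS data)
a = 500.0   # expected total number of detectable faults
b = 0.05    # fault detection rate per unit time
c1 = 1.0    # cost of correcting a fault found during testing
c2 = 5.0    # cost of correcting a fault found after release (typically higher)
c3 = 0.8    # testing/opportunity cost per unit time (reflects market pressure)

def mean_faults(t):
    """Expected cumulative number of faults detected by time t
    under the classical exponential SRGM: m(t) = a * (1 - exp(-b t))."""
    return a * (1.0 - np.exp(-b * t))

def total_cost(t):
    """Expected total cost of releasing at time t:
    testing-phase fixes + post-release fixes + time-dependent testing cost."""
    detected = mean_faults(t)
    remaining = a - detected
    return c1 * detected + c2 * remaining + c3 * t

# Numerically locate the release time that minimizes expected total cost.
res = minimize_scalar(total_cost, bounds=(0.0, 1000.0), method="bounded")
print(f"Optimal release time: {res.x:.1f}, expected total cost: {res.fun:.1f}")
```

Under this simple formulation, delaying release reduces the (more expensive) post-release fixes but increases testing cost, so the minimizer balances the remaining fault count against fixing costs and market pressure, which is the trade-off the abstract's release policy formalizes.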