BACKGROUND: Software quality prediction models play a crucial role in identifying vulnerable software components during the early stages of development, thereby optimizing resource allocation and enhancing overall software quality. Although various classification algorithms have been employed to develop these prediction models, most studies have relied on default hyperparameter settings, leading to significant variability in model performance. Tuning the hyperparameters of classification algorithms can enhance the predictive capability of quality models by identifying settings that improve accuracy and effectiveness.

METHOD: This systematic review examines studies that have applied hyperparameter tuning techniques to develop prediction models in the software quality domain. The review covers diverse areas such as defect prediction, maintenance estimation, change impact prediction, reliability prediction, and effort estimation, as these domains demonstrate the wide applicability of common learning algorithms.

RESULTS: The review identified 31 primary studies on hyperparameter tuning for software quality prediction models. The results demonstrate that tuning the parameters of classification algorithms enhances the performance of prediction models. The study also found that certain classification algorithms are highly sensitive to their parameter settings and achieve optimal performance only when tuned appropriately, whereas others show low sensitivity to their parameter settings, making tuning unnecessary in those cases.

CONCLUSION: Based on the findings of this review, we conclude that the predictive capability of software quality prediction models can be significantly improved by tuning their hyperparameters. To facilitate effective hyperparameter tuning, we provide practical guidelines derived from the insights obtained through this study.
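As a minimal illustration of the kind of tuning the abstract describes (this sketch is ours, not drawn from any of the primary studies), the following Python snippet uses scikit-learn's GridSearchCV to tune a random forest classifier on synthetic stand-in data for a defect-prediction task; the feature matrix, grid values, and scoring choice are all assumptions for demonstration.

```python
# A minimal sketch of hyperparameter tuning for a defect-prediction
# classifier. The data here is synthetic; a real study would use
# project metrics (e.g., static code metrics) with defective/clean labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Hypothetical imbalanced dataset standing in for software metrics data.
X, y = make_classification(
    n_samples=500, n_features=20, weights=[0.8, 0.2], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# Candidate hyperparameter grid; the values are illustrative only.
param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 10, 20],
    "min_samples_leaf": [1, 5],
}

# Cross-validated grid search; AUC is a common choice for the class
# imbalance typical of defect data.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    scoring="roc_auc",
    cv=5,
)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Held-out AUC:", search.score(X_test, y_test))
```

Comparing the held-out score of the tuned model against one trained with default settings is the basic experimental contrast the reviewed studies evaluate.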