Abstract

Online feature request management systems are popular tools for gathering stakeholders’ change requests during system evolution. Deciding which feature requests require attention and how much upfront analysis to perform on them is an important problem in this context: too little upfront analysis may result in inadequate functionality being developed, costly changes, and wasted development effort; too much upfront analysis wastes time and resources. Early predictions about which feature requests are most likely to fail due to insufficient or inadequate upfront analysis could facilitate such decisions. Our objective is to study whether such predictions can be made automatically from the characteristics of the online discussions of feature requests. This paper presents a study of feature request failures in seven large projects, an automated tool-implemented framework for constructing failure prediction models, and a comparison of the performance of the different prediction techniques for these projects. The comparison relies on a cost-benefit model for assessing the value of additional upfront analysis. In this model, the value of additional upfront analysis depends on its probability of success in preventing failures and on the relative cost of the failures it prevents compared to its own cost. We show that, for reasonable estimates of these two parameters, automated prediction models provide more value than a set of baselines for many failure types and projects. This suggests that automated failure prediction during requirements elicitation is a promising approach for guiding requirements engineering efforts in online settings.
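To make the cost-benefit reasoning concrete, the following is a minimal illustrative sketch, not the paper's own formulation: it assumes the model takes a simple expected-value form in which the value of additional upfront analysis is its success probability times the relative cost of the failures it prevents, minus the (normalized) cost of the analysis itself. The function name and parameterization are assumptions made for illustration only.

```python
# Illustrative sketch only; the paper's exact cost-benefit formulation may differ.
# p: estimated probability that extra upfront analysis prevents the failure
# relative_cost: cost of the failure divided by the cost of the extra analysis

def value_of_extra_analysis(p: float, relative_cost: float) -> float:
    """Net value of additional upfront analysis, in units of analysis cost.

    Positive values mean the expected savings from prevented failures
    exceed the cost of performing the analysis itself.
    """
    return p * relative_cost - 1.0

# Example: a 30% chance of preventing a failure that costs 10x the analysis
# yields a net value of 2.0, so the extra analysis would pay off.
print(value_of_extra_analysis(0.3, 10.0))
```

Under this reading, additional upfront analysis is worthwhile whenever the success probability times the relative failure cost exceeds one, which is why the abstract's conclusion hinges on reasonable estimates of those two parameters.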
