Abstract

Internal crowdsourcing in software engineering is a mechanism for recruiting engineers to carry out software engineering tasks more efficiently. However, engineers are busy resources and time is a valuable asset in industry, which hinders internal crowdsourcing from becoming a widespread practice in software engineering. In this work, we propose a low-cost variant of internal crowdsourcing for locating features in models, which limits the time that engineers spend providing knowledge. Our approach uses the knowledge provided by the internal crowd to automatically reformulate an initial feature description; the reformulated description is then used to automatically locate the relevant model fragment using Latent Semantic Indexing. We evaluate our approach with four query reformulation techniques in a real-world case study from our industrial partner, comparing it against a baseline in terms of recall, precision, and F-measure and assessing the differences with statistical tests. Despite the time limitation, the results show that low-cost internal crowdsourcing significantly improves feature location results in an industrial context where engineers' availability is scarce.
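
To make the pipeline described above concrete, the following minimal Python sketch reformulates an initial feature description with hypothetical crowd-provided keywords and ranks a toy set of model-fragment descriptions with Latent Semantic Indexing (TF-IDF followed by truncated SVD). The fragment texts, query, keywords, and parameter values are illustrative assumptions, not the paper's actual data or implementation.

# Minimal sketch, assuming a toy corpus: model fragments are plain-text term
# documents, the crowd's keywords are appended to the initial query, and
# Latent Semantic Indexing ranks fragments by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical textual representations of model fragments (e.g., element names).
model_fragments = [
    "train door control open close sensor",
    "traction motor speed controller brake",
    "passenger information display announcement audio",
]

initial_query = "open the doors"
crowd_keywords = ["door", "sensor", "control"]  # knowledge gathered from the internal crowd
reformulated_query = initial_query + " " + " ".join(crowd_keywords)

# Build the latent semantic space from the fragment corpus.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(model_fragments)
lsi = TruncatedSVD(n_components=2, random_state=0)
fragment_vectors = lsi.fit_transform(tfidf)

# Project the reformulated query into the same space and rank the fragments.
query_vector = lsi.transform(vectorizer.transform([reformulated_query]))
scores = cosine_similarity(query_vector, fragment_vectors)[0]
for index, score in sorted(enumerate(scores), key=lambda pair: pair[1], reverse=True):
    print(f"fragment {index}: similarity {score:.3f}")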

Highlights

  • Crowdsourcing in software engineering [1] uses an open call format to recruit software engineers to cooperate in carrying out various types of software engineering tasks such as requirements extraction, design, coding and testing

  • We applied the following criteria: (1) we did not consider techniques that rely on sources external to the corpus; (2) we discarded techniques that depend on relationships between words in Natural Language (NL), because these relationships do not hold between the words that appear in software [20]; and (3) since our goal is to support the daily Feature Location (FL) tasks of engineers, we disregarded non-practical techniques based on algorithms with high computational complexity

  • The results show statistically significant differences in all performance indicators, since every p-value is smaller than the corresponding significance threshold (0.05); see the sketch below this list
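
As a hedged illustration of the kind of paired comparison referred to above, the following Python sketch runs a Wilcoxon signed-rank test on hypothetical per-feature F-measure values for a baseline and for an approach. The numbers, sample size, and choice of test are assumptions for illustration only, not the paper's data or its exact statistical procedure.

# Illustrative paired significance check on hypothetical F-measure values.
from scipy.stats import wilcoxon

baseline_f = [0.41, 0.35, 0.52, 0.30, 0.47, 0.38, 0.44, 0.33]
approach_f = [0.55, 0.49, 0.61, 0.42, 0.58, 0.51, 0.57, 0.46]

statistic, p_value = wilcoxon(baseline_f, approach_f)
print(f"Wilcoxon statistic = {statistic}, p-value = {p_value:.4f}")
if p_value < 0.05:
    print("Difference in F-measure is statistically significant at the 0.05 level.")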

Introduction

Crowdsourcing in software engineering [1] uses an open call format to recruit software engineers to cooperate in carrying out various types of software engineering tasks, such as requirements extraction, design, coding, and testing. Studies show that software engineers spend about 85% of the total effort on software maintenance and evolution [2]. Feature Location (FL), one of the most important tasks undertaken during software maintenance [3], is the process of finding the set of software artifacts that realize a specific functionality. To perform FL, software engineers often use search engines that take as input a query describing the target feature in Natural Language (NL). Engineers generally do not know which software artifacts realize a target feature in an
