Abstract

In crowdsourced testing, crowd workers from different places help developers conduct testing and submit test reports for the abnormal behaviors they observe. Developers manually inspect each test report and make an initial decision about the potential bug. However, because many reports are of poor quality, test reports are handled extremely slowly; moreover, due to limited resources, some test reports are not handled at all. Therefore, researchers have attempted to resolve the problem of test report prioritization and have proposed many methods. However, these methods do not consider the impact of duplicate test reports. In this paper, we focus on the problem of test report prioritization and present a new method named DivClass, which combines a diversity strategy and a classification strategy. First, we leverage Natural Language Processing (NLP) techniques to preprocess crowdsourced test reports. Then, we build a similarity matrix by introducing an asymmetric similarity computation strategy. Finally, we combine the diversity strategy and the classification strategy to determine the inspection order of test reports. To validate the effectiveness of DivClass, experiments are conducted on five crowdsourced test report datasets. Experimental results show that DivClass achieves 0.8887 in terms of APFD (Average Percentage of Faults Detected) and improves on the state-of-the-art technique DivRisk by 14.12% on average. The asymmetric similarity computation strategy improves DivClass by 4.82% in terms of APFD on average. In addition, empirical results show that DivClass can greatly reduce the number of inspected test reports.
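For context, APFD is the standard metric for prioritization effectiveness: with n test reports in the inspection order, m distinct faults, and TF_i the position of the first report that reveals fault i, the usual definition is

\mathrm{APFD} = 1 - \frac{TF_1 + TF_2 + \cdots + TF_m}{n \cdot m} + \frac{1}{2n}

Higher values mean faults are revealed earlier in the inspection order.

To make the pipeline concrete, the following is a minimal Python sketch of a greedy diversity-based ordering over an asymmetric token-overlap similarity. The similarity formula, the naive tokenization, and the names asymmetric_overlap and diversity_order are illustrative assumptions for this sketch only; the paper's actual NLP preprocessing, asymmetric similarity computation, and classification strategy are not reproduced here.

def asymmetric_overlap(tokens_a, tokens_b):
    # Illustrative asymmetric similarity: the fraction of report a's tokens
    # that also appear in report b, so sim(a, b) != sim(b, a) in general.
    # This is an assumption for the sketch, not necessarily the paper's formula.
    if not tokens_a:
        return 0.0
    return len(set(tokens_a) & set(tokens_b)) / len(set(tokens_a))

def diversity_order(reports):
    # Greedy diversity-based prioritization: repeatedly pick the report that is
    # least similar to the reports already selected, so likely duplicates are
    # pushed toward the end of the inspection queue.
    tokenized = [r.lower().split() for r in reports]  # stand-in for real NLP preprocessing
    n = len(reports)
    sim = [[asymmetric_overlap(tokenized[i], tokenized[j]) for j in range(n)]
           for i in range(n)]
    remaining = list(range(n))
    order = [remaining.pop(0)]  # seed the order with the first report
    while remaining:
        # a candidate's redundancy is its highest similarity to any selected report
        nxt = min(remaining, key=lambda i: max(sim[i][j] for j in order))
        remaining.remove(nxt)
        order.append(nxt)
    return order

print(diversity_order([
    "app crashes when tapping the pay button",
    "payment page crashes on tap",
    "search results page shows blank screen",
]))

Running the sketch orders the dissimilar report before the near-duplicate crash report, which illustrates why a diversity strategy reduces the number of reports developers must inspect before covering distinct faults.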

Highlights

  • Mobile applications have become increasingly important and powerful in our daily lives and work, for tasks such as transportation, shopping, and payment

  • Some studies have focused on crowdsourced test report prioritization and proposed state-of-the-art techniques for this problem, such as DivRisk [13] and Text&ImageDiv [5]

  • The results demonstrate that the classification strategy can substantially improve the effectiveness of test report prioritization


Summary

Introduction

In recent years, mobile applications have become increasingly important and powerful in our daily lives and work, supporting tasks such as transportation, shopping, and payment. Different from traditional software testing, crowdsourced testing is performed by a large number of online testers from different places, including non-professional workers, professional testers, and end users [2]; these geographically decentralized testers are called crowd workers. Crowdsourced testing can effectively reduce test cost, shorten the test cycle, and improve test efficiency [3].

