Abstract

Crowdsourced testing has attracted attention from both academia and industry. In crowdsourced testing, workers submit many test reports to the crowdsourced testing platform. These reports usually provide critical information for understanding and reproducing bugs. A high-quality bug report gives complete reproduction steps that help locate and identify the bug quickly, whereas a low-quality report can slow the inspection process. To predict whether a test report should be selected for inspection under limited resources, we propose a new framework named CTRQS that automatically models the quality of crowdsourced test reports. We summarize the desirable properties and measurable quality indicators of crowdsourced test reports and propose novel analytical indicators based on dependency parsing to better determine report quality; the quality indicators are implemented with rules. Experiments on five crowdsourced test report datasets of mobile applications show that CTRQS can effectively identify quality problems in test reports and correctly predict report quality with an accuracy of up to 88%.
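
The abstract does not detail how the dependency-parsing indicators are computed. As a purely illustrative sketch (not the paper's implementation), one rule-style indicator could check whether each reproduction step contains a verb with a direct object, flagging steps that lack an actionable structure; the example below assumes spaCy and its small English model.

```python
# Hypothetical illustration only: flag reproduction-step sentences that lack a
# verb-object structure, which is one plausible signal of a vague description.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def has_action_structure(sentence: str) -> bool:
    """Return True if the sentence contains a verb governing a direct object,
    e.g. "Tap the login button"; such steps are usually actionable."""
    doc = nlp(sentence)
    for token in doc:
        if token.pos_ == "VERB" and any(
            child.dep_ in ("dobj", "obj") for child in token.children
        ):
            return True
    return False

steps = ["Tap the login button.", "Something went wrong."]
for step in steps:
    label = "actionable" if has_action_structure(step) else "vague"
    print(f"{step} -> {label}")
```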
