Abstract
Crowdsourced testing has attracted attention from both academia and industry. In crowdsourced testing, workers submit large numbers of test reports to the crowdsourced testing platform. These reports usually provide critical information for understanding and reproducing bugs. A high-quality bug report provides complete reproduction steps that help developers quickly locate and identify the bug, whereas a low-quality report can slow down inspection. To predict whether a test report should be selected for inspection under limited resources, we propose a new framework named CTRQS that automatically models the quality of crowdsourced test reports. We summarize the desirable properties and measurable quality indicators of crowdsourced test reports, and we propose novel analytical indicators based on dependency parsing to better assess report quality. The quality indicators are implemented as rules. Experiments on five crowdsourced test report datasets of mobile applications show that CTRQS can effectively identify quality problems in test reports and correctly predict the quality of test reports with an accuracy of up to 88%.
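The abstract does not include CTRQS's implementation, but a minimal sketch can illustrate what a dependency-parsing-based quality rule might look like. The sketch below (assuming spaCy with its small English model; the function name step_is_actionable and the verb-object rule are hypothetical, not taken from the paper) flags reproduction steps that lack a concrete verb-object action as potentially low quality.

```python
# Hypothetical sketch of a rule-based quality indicator built on dependency parsing.
# Assumes spaCy and its English model: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def step_is_actionable(step: str) -> bool:
    """Treat a reproduction step as actionable if it contains a verb
    with a direct object (e.g., 'tap the login button')."""
    doc = nlp(step)
    for token in doc:
        if token.pos_ == "VERB":
            # "dobj"/"obj" mark direct objects in spaCy's dependency labels
            if any(child.dep_ in ("dobj", "obj") for child in token.children):
                return True
    return False

steps = [
    "Open the settings page and tap the dark-mode switch.",
    "It crashes.",  # vague: no reproducible action
]
for s in steps:
    print(f"{s!r}: actionable={step_is_actionable(s)}")
```

A rule like this is one plausible instance of the "measurable quality indicators" the abstract describes: it scores each step structurally rather than by keywords, so vague reports with no verb-object actions can be down-ranked before manual inspection.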