Abstract

Collating vast numbers of test reports is a time-consuming and laborious task in crowdsourced testing. Crowdsourced test reports are usually presented in two forms, text and images, which carry symmetrical content. Researchers have proposed many text- and image-based methods for prioritizing crowdsourced test reports of mobile applications. However, crowdsourced test reports of web applications typically have clearer textual descriptions of errors but noisier error screenshots. This gap motivates us to propose a method for prioritizing crowdsourced test reports of web applications so that all errors can be detected earlier. In this paper, we integrate the text and image information in test reports to enhance the analysis process. First, we use natural language processing (NLP) techniques to extract textual features from the error descriptions; then we symmetrically extract image features from the error screenshots, i.e., we use optical character recognition (OCR) to recover the textual information in the screenshots and again apply NLP to extract features. To validate our approach, we conduct experiments on 717 test reports. The experimental results show that our method achieves a higher APFD (Average Percentage of Faults Detected) and a shorter runtime than state-of-the-art prioritization methods.
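To make the symmetric feature-extraction idea concrete, the following is a minimal sketch, not the authors' implementation: the abstract does not name the NLP or OCR tooling, so pytesseract stands in for the OCR step and TF-IDF for the NLP featurization, and the report descriptions and screenshot paths are hypothetical.

```python
# Sketch of the pipeline described in the abstract: OCR each error
# screenshot into text, then apply the same NLP featurization to both
# the written error description and the OCR-recovered screenshot text.
# Library choices (pytesseract, scikit-learn TF-IDF) are assumptions,
# not the paper's actual tooling.

from PIL import Image
import pytesseract
from sklearn.feature_extraction.text import TfidfVectorizer

def screenshot_text(path: str) -> str:
    """Recover the textual content of an error screenshot via OCR."""
    return pytesseract.image_to_string(Image.open(path))

# Hypothetical report data: each report pairs a textual error
# description with a screenshot of the error.
reports = [
    ("Login button does nothing after submitting the form", "report_001.png"),
    ("Search results page shows a 500 Internal Server Error", "report_002.png"),
]

# Featurize both modalities symmetrically with one shared NLP model,
# so descriptions and screenshot texts live in the same feature space.
corpus = [desc for desc, _ in reports] + [screenshot_text(img) for _, img in reports]
features = TfidfVectorizer().fit_transform(corpus)
```

With both modalities in a shared feature space, a prioritization scheme can rank reports by (dis)similarity so that reports revealing distinct errors surface earlier in the review queue.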
