Abstract
Collating vast numbers of test reports is a time-consuming and laborious task in crowdsourced testing. Crowdsourced test reports are usually presented in two forms, text and images, whose content is symmetrical. Researchers have proposed many text- and image-based methods for prioritizing crowdsourced test reports of mobile applications. However, crowdsourced test reports of web applications typically contain clearer textual descriptions of errors but noisier error screenshots. This gap motivates us to propose a prioritization method for crowdsourced test reports of web applications that detects all errors earlier. In this paper, we integrate the text and image information in test reports to enhance the analysis process. First, we use natural language processing (NLP) techniques to extract textual features from the error descriptions; we then symmetrically extract image features from the error screenshots, i.e., we apply optical character recognition (OCR) to obtain the textual information in the screenshots and again use NLP techniques to extract features. To validate our approach, we conduct experiments on 717 test reports. The experimental results show that our method achieves a higher APFD (average percentage of faults detected) and a shorter runtime than state-of-the-art prioritization methods.
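The abstract describes a pipeline that merges each report's error description with text recovered from its screenshots (via OCR) and then extracts textual features. As an illustrative sketch only (the paper's actual feature extractor and OCR step are not specified here), the fusion-and-featurization stage can be approximated with a plain TF-IDF computation over the combined text; the example report contents below are hypothetical, and in practice the OCR strings would come from a tool such as an OCR library rather than being given directly:

```python
import math
from collections import Counter

def tfidf_features(docs):
    """Compute a TF-IDF weight vector (dict) for each tokenized document."""
    n_docs = len(docs)
    df = Counter()                       # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        # term frequency scaled by inverse document frequency
        vecs.append({t: (tf[t] / len(doc)) * math.log(n_docs / df[t]) for t in tf})
    return vecs

# Hypothetical reports: (error description, text recovered from the screenshot by OCR)
reports = [
    ("login button unresponsive", "error 500 internal server"),
    ("page layout broken on submit", "error 500 internal server"),
    ("login fails with timeout", "connection timeout"),
]

# Symmetrically merge description text and screenshot text, then featurize
docs = [(desc + " " + ocr).lower().split() for desc, ocr in reports]
vecs = tfidf_features(docs)
```

The resulting sparse vectors could then feed any similarity-based prioritization strategy, e.g. repeatedly selecting the report most dissimilar to those already inspected.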