Abstract

Crowdsourced testing has become a popular method for mobile application testing: it can simulate real usage scenarios and detect a wide variety of bugs with a large workforce. However, inspecting and classifying the overwhelming number of crowdsourced test reports has become a time-consuming yet unavoidable task. To reduce this effort, software engineering researchers have proposed many automatic test report classification techniques over the past decades. These techniques, however, may become less effective for crowdsourced mobile application testing, where test reports often contain sparse text descriptions alongside rich screenshots and thus differ fundamentally from those of traditional desktop software. To bridge this gap, we first fuse features extracted from text descriptions and screenshots to classify crowdsourced test reports. We then empirically investigate the effectiveness of our feature fusion approach under six classification algorithms, namely Naive Bayes (NB), k-Nearest Neighbors (kNN), Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), and Convolutional Neural Network (CNN). The experimental results on six widely used applications show that (1) SVM with fused features outperforms the other classifiers on crowdsourced test reports, and (2) image features improve test report classification performance.
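
The sketch below illustrates the general idea of text-and-screenshot feature fusion followed by SVM classification. It is a minimal, illustrative example only: the abstract does not specify the paper's actual feature extractors, so TF-IDF text vectors, grayscale intensity histograms for screenshots, and the toy data are all assumptions made here for demonstration.

# Minimal sketch of fusing text and screenshot features for test-report
# classification. Assumptions: TF-IDF for text, a 32-bin grayscale
# intensity histogram for screenshots, and synthetic toy data; none of
# these details are taken from the paper itself.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical toy data: short report descriptions, fake screenshots,
# and binary labels (1 = true bug, 0 = not a bug).
texts = [
    "app crashes when tapping the login button",
    "screen layout looks fine on my device",
    "payment page freezes after entering card number",
    "no issue found, everything works as expected",
] * 25
rng = np.random.default_rng(0)
screenshots = rng.integers(0, 256, size=(len(texts), 64, 64))  # fake 64x64 grayscale images
labels = np.array([1, 0, 1, 0] * 25)

# Text features: TF-IDF vectors over the report descriptions.
text_features = TfidfVectorizer().fit_transform(texts).toarray()

# Image features: a normalized intensity histogram per screenshot.
def histogram_features(img, bins=32):
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / hist.sum()

image_features = np.array([histogram_features(img) for img in screenshots])

# Feature fusion: concatenate text and image features for each report.
fused = np.hstack([text_features, image_features])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.3, random_state=0, stratify=labels)

# SVM was the best-performing classifier in the study; the linear kernel
# here is purely an illustrative default.
clf = SVC(kernel="linear").fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))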
