Abstract

As the development cycle of applications has shortened, rapid and accurate application testing technology has become increasingly important. Because application testing is costly, mobile component detection based on deep learning is essential to reduce reliance on expensive human resources. In this paper, we propose a Clickable Object Detection Network (CODNet) for mobile component detection across a wide range of mobile screen resolutions. CODNet consists of three modules: a feature extraction module, a deconvolution module, and a prediction module, designed for both performance improvement and scalability. The feature extraction module uses squeeze-and-excitation blocks to extract features efficiently and changes the aspect ratio of the input image to 1:2, the ratio closest to that of mobile screens. The deconvolution module provides feature maps of various sizes by upsampling feature maps through a top-down pathway and lateral connections. The prediction module uses the anchor transfer block to select anchor sizes suited to the mobile environment from a set of anchor candidates obtained by analyzing the mobile dataset. Moreover, we improve object detection performance by building a new mobile screen dataset consisting of data collected from various resolutions and operating systems. We show that our model achieves competitive mean average precision on this dataset compared to other models.
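
To make the two architectural ideas named above concrete, the following is a minimal PyTorch sketch of a squeeze-and-excitation block and a single FPN-style top-down step with a lateral connection. It is an illustration of the general techniques, not the authors' CODNet implementation; the channel counts, layer names, upsampling mode, and toy input shapes are assumptions.

```python
# Illustrative sketch (not the authors' code): squeeze-and-excitation feature
# reweighting and one top-down/lateral merge step, as referenced in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels using global spatial context."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        # Squeeze: global average pooling over the spatial dimensions.
        s = x.mean(dim=(2, 3))
        # Excitation: two FC layers produce per-channel weights in (0, 1).
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))
        return x * w.view(x.size(0), -1, 1, 1)

class TopDownMerge(nn.Module):
    """One top-down step: upsample the coarser map and add a 1x1 lateral projection."""
    def __init__(self, top_channels, lateral_channels, out_channels=256):
        super().__init__()
        self.lateral = nn.Conv2d(lateral_channels, out_channels, kernel_size=1)
        self.top = nn.Conv2d(top_channels, out_channels, kernel_size=1)
        self.smooth = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, top, lateral):
        up = F.interpolate(self.top(top), size=lateral.shape[-2:], mode="nearest")
        return self.smooth(up + self.lateral(lateral))

if __name__ == "__main__":
    # Toy feature maps with a 1:2 (height shorter than width swapped accordingly)
    # aspect ratio, echoing the 1:2 input ratio mentioned in the abstract.
    c4 = torch.randn(1, 512, 16, 32)   # coarse feature map
    c3 = torch.randn(1, 256, 32, 64)   # finer feature map
    se = SEBlock(512)
    merge = TopDownMerge(top_channels=512, lateral_channels=256)
    p3 = merge(se(c4), c3)
    print(p3.shape)  # torch.Size([1, 256, 32, 64])
```

In this sketch, the SE block plays the role of the feature extraction module's channel reweighting, while `TopDownMerge` mirrors the deconvolution module's top-down pathway and lateral connections; a full model would stack several such merges to produce feature maps at multiple scales.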
