Abstract

Manual inspection of mobile phone screen defects is inconsistent, and the image features used by traditional machine learning methods are usually hand-crafted from experience, which leads to unsatisfactory detection results. To address this, this paper proposes a mobile phone screen defect detection model, Ghostbackbone, based on YOLOv5s and the GhostBottleneck. The bottleneck of Ghostbackbone adopts and improves the GhostBottleneck of GhostNet, and its attention module combines Coordinate Attention with depthwise separable convolution to reduce the number of parameters. Finally, Ghostbackbone is trained on the mobile phone screen defect dataset using YOLOv5 as the object detector. The experimental results show that Ghostbackbone has 24% of the parameters of YOLOv5s, its average detection time per image is only 2% lower than that of YOLOv5s, and its mAP@0.5:0.95 is 2% higher than that of MobileNetV3-Small.
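To make the named building blocks concrete, the following is a minimal PyTorch sketch of a Ghost module, Coordinate Attention, and a simplified GhostBottleneck that chains them. All layer widths, kernel sizes, and the placement of the attention block are illustrative assumptions, not the authors' exact design; in particular, the paper's use of depthwise separable convolution inside the attention module is not reproduced here, where plain 1x1 convolutions are kept for brevity.

```python
# Illustrative sketch only: Ghost module, Coordinate Attention, and a
# simplified GhostBottleneck. Widths and kernel sizes are assumptions,
# not the configuration used by Ghostbackbone in the paper.
import math
import torch
import torch.nn as nn


class GhostModule(nn.Module):
    """Small primary conv plus a cheap depthwise conv, concatenated."""
    def __init__(self, inp, oup, ratio=2, dw_size=3):
        super().__init__()
        init_ch = math.ceil(oup / ratio)
        cheap_ch = init_ch * (ratio - 1)
        self.primary = nn.Sequential(
            nn.Conv2d(inp, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, cheap_ch, dw_size, padding=dw_size // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True))
        self.oup = oup

    def forward(self, x):
        y1 = self.primary(x)
        y2 = self.cheap(y1)
        return torch.cat([y1, y2], dim=1)[:, :self.oup]


class CoordinateAttention(nn.Module):
    """Pool along H and W separately, then re-weight each position."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (B, C, 1, W)
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, mid, 1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True))
        self.attn_h = nn.Conv2d(mid, channels, 1)
        self.attn_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        _, _, h, w = x.shape
        xh = self.pool_h(x)                         # (B, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)     # (B, C, W, 1)
        y = self.reduce(torch.cat([xh, xw], dim=2))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.attn_h(yh))                       # (B, C, H, 1)
        aw = torch.sigmoid(self.attn_w(yw.permute(0, 1, 3, 2)))   # (B, C, 1, W)
        return x * ah * aw


class GhostBottleneck(nn.Module):
    """Ghost expand -> coordinate attention -> ghost project, with a shortcut."""
    def __init__(self, inp, hidden, oup):
        super().__init__()
        self.expand = GhostModule(inp, hidden)
        self.attn = CoordinateAttention(hidden)
        self.project = GhostModule(hidden, oup)
        self.shortcut = (nn.Identity() if inp == oup
                         else nn.Conv2d(inp, oup, 1, bias=False))

    def forward(self, x):
        return self.project(self.attn(self.expand(x))) + self.shortcut(x)


if __name__ == "__main__":
    block = GhostBottleneck(64, 128, 64)
    print(block(torch.randn(1, 64, 40, 40)).shape)  # torch.Size([1, 64, 40, 40])
```

In a backbone built from such blocks, the Ghost modules replace most ordinary convolutions to cut parameters, while the attention re-weights features along the height and width axes before projection.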
