Abstract

Because the YOLOv4 model is unsuitable for mobile and embedded terminals, a lightweight MobileNetV3-YOLOv4 network is often substituted; however, its detection accuracy on dense silkworm targets drops sharply, and the accuracy loss is too large. This paper proposes KM-YOLOv4, a lightweight YOLOv4 detection algorithm improved by multi-scale feature fusion, for the detection of dense silkworm targets. The K-means algorithm reconstructs anchor boxes suited to the different objects to enhance detection accuracy. The lightweight MobileNetV3 backbone, built on depthwise separable convolutions, replaces the YOLOv4 backbone, reducing the computational load and model size, while the added multi-scale feature fusion compensates for the accuracy loss introduced by the depthwise separable convolutions and thus improves the detection accuracy of the lightweight model. Experimental results on the dense silkworm dataset show that KM-YOLOv4 reduces the model size by about 74% compared with YOLOv4 and improves detection accuracy by 1.82% over the unimproved MobileNetV3-YOLOv4. The model is therefore better suited to deployment on mobile and embedded devices.
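The abstract does not include the authors' implementation, but the anchor-box reconstruction it describes is conventionally done by clustering the labelled box widths and heights with K-means under a 1 − IoU distance (as introduced for YOLOv2/YOLOv3). The sketch below illustrates that idea only; the function names (`iou_wh`, `kmeans_anchors`) and the synthetic box sizes are illustrative assumptions, not code or data from the paper.

```python
import numpy as np

def iou_wh(boxes, clusters):
    """IoU between boxes and cluster centroids using widths/heights only
    (all boxes are treated as if they share the same top-left corner)."""
    w = np.minimum(boxes[:, None, 0], clusters[None, :, 0])
    h = np.minimum(boxes[:, None, 1], clusters[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
        + (clusters[:, 0] * clusters[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster (width, height) pairs with 1 - IoU as the distance metric."""
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the centroid with the highest IoU.
        assign = np.argmax(iou_wh(boxes, clusters), axis=1)
        new_clusters = np.array([
            boxes[assign == j].mean(axis=0) if np.any(assign == j) else clusters[j]
            for j in range(k)
        ])
        if np.allclose(new_clusters, clusters):
            break
        clusters = new_clusters
    # Sort anchors by area so they can be split across the detection scales.
    return clusters[np.argsort(clusters[:, 0] * clusters[:, 1])]

if __name__ == "__main__":
    # Synthetic stand-in for labelled silkworm box sizes (width, height in pixels).
    rng = np.random.default_rng(1)
    boxes = np.abs(rng.normal([40.0, 20.0], [10.0, 5.0], size=(500, 2)))
    print(kmeans_anchors(boxes, k=9).round(1))
```

Run against the real annotation file, the sorted centroids would be assigned to the network's detection scales from smallest to largest, replacing the default YOLOv4 anchors.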
