Abstract

Object detection aims to locate and classify objects in an image. Current state-of-the-art high-accuracy object detection algorithms rely on complex networks and incur high computational cost. They place heavy demands on the memory and computing capability of the deployment device, making them difficult to apply to mobile and embedded devices. Using depthwise separable convolution and several efficient network structures, this paper designs a lightweight backbone network and two different multiscale feature fusion structures, and proposes a lightweight one-stage object detection algorithm, MiniYOLO. With a model size of only 4.2 MB, MiniYOLO still maintains high detection accuracy, achieving a favorable trade-off between model size and detection accuracy. Experimental results on the MS COCO 2017 dataset show that MiniYOLO achieves higher mAP than the state-of-the-art PP-YOLO-tiny at the same model size. Compared with other lightweight object detection algorithms, MiniYOLO offers advantages in detection accuracy or model size. The code associated with this paper can be downloaded from https://github.com/CaedmonLY/MiniYOLO/.
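
The abstract credits depthwise separable convolution for much of the model-size reduction. The following is a minimal PyTorch sketch of such a block for illustration only; the layer arrangement, kernel sizes, and activation are assumptions and do not reproduce the actual MiniYOLO backbone.

```python
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution.

    Illustrative block only; not the MiniYOLO implementation.
    """

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_channels).
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        self.bn1 = nn.BatchNorm2d(in_channels)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))
        x = self.act(self.bn2(self.pointwise(x)))
        return x


if __name__ == "__main__":
    # 320x320 RGB input; the resolution is an assumption, chosen as a
    # typical size for lightweight detectors.
    x = torch.randn(1, 3, 320, 320)
    block = DepthwiseSeparableConv(3, 32, stride=2)
    print(block(x).shape)  # torch.Size([1, 32, 160, 160])
```

The factorization replaces a dense k x k x C_in x C_out convolution with a per-channel k x k filter plus a 1x1 channel mixer, which is the standard way such blocks cut parameters and FLOPs relative to ordinary convolutions.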
