Abstract

An improved object detection algorithm, EA‐YOLO, based on YOLOv5 is proposed to address drastic changes in target scale, low detection accuracy, and high miss rates in unmanned aerial vehicle (UAV) aerial photography scenarios. First, a DFE module is proposed to improve the effectiveness of feature extraction and enhance the whole model's ability to learn residual features. Second, a CWFF architecture is introduced to enable deeper, more effective feature fusion. Finally, to overcome the difficulty traditional algorithms have in detecting small targets, a novel SDS structure is designed that reuses low‐level feature maps to strengthen the network's small‐target detection, making it better suited to the small objects common in drone imagery. Experiments on the VisDrone2019 dataset show that the proposed EA‐YOLOs achieves an mAP@0.5 of 39.9%, an 8% improvement over YOLOv5s, and an mAP@0.5:0.95 of 22.2%, a 5.2% improvement over the original algorithm. Compared with YOLOv3, YOLOv5l, and YOLOv8s, the mAP@0.5 of EA‐YOLOs improves by 0.9%, 1.8%, and 0.6%, respectively, while the GFLOPs decrease by 86.4%, 80.6%, and 26.7%. © 2024 Institute of Electrical Engineers of Japan and Wiley Periodicals LLC.
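The abstract does not specify the exact CWFF fusion rule. A minimal sketch of one plausible cross-layer weighted fusion scheme is shown below, assuming BiFPN-style fast normalized fusion with non-negative per-branch weights; the function name `weighted_fusion` and the toy feature maps are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def weighted_fusion(features, weights, eps=1e-4):
    """Fuse same-shaped feature maps with normalized non-negative weights.

    This follows BiFPN-style "fast normalized fusion" (an assumption --
    the paper's exact CWFF formulation is not given in the abstract):
        out = sum_i (w_i * f_i) / (sum_i w_i + eps)
    """
    w = np.maximum(np.asarray(weights, dtype=np.float64), 0.0)  # clamp weights to >= 0
    w = w / (w.sum() + eps)                                     # normalize so they sum to ~1
    return sum(wi * f for wi, f in zip(w, features))

# Two toy 4x4 "feature maps" standing in for different pyramid levels
f1 = np.ones((4, 4))
f2 = np.full((4, 4), 3.0)
fused = weighted_fusion([f1, f2], [1.0, 1.0])  # equal weights -> elementwise average
```

With equal weights the fusion reduces to an average; during training the weights would be learnable, letting the network emphasize whichever pyramid level carries more useful detail for small targets.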
