Abstract

Traditional unmanned aerial vehicle (UAV) detection methods struggle with the multi-scale variations that occur during flight and with complex backgrounds, yielding low accuracy, whereas existing deep learning detection methods achieve high accuracy but depend heavily on computing hardware, making efficient detection of small UAV targets difficult. To address these challenges, this paper proposes an improved lightweight, high-precision model, YOLOv8-E (Enhanced YOLOv8), for the fast and accurate detection and identification of small UAVs in complex environments. First, a Sobel filter is introduced to enhance the C2f module, forming the C2f-ESCFFM (Edge-Sensitive Cross-Stage Feature Fusion Module), which fuses a SobelConv branch for edge extraction with a convolution branch for spatial information, achieving higher computational efficiency and stronger feature representation while preserving detection accuracy as much as possible. Second, the neck network builds on the HSFPN (High-level Screening-feature Pyramid Network) architecture and introduces the CAA (Context Anchor Attention) mechanism to enhance the semantic parsing of low-level features, forming a new CAHS-FPN (Context-Augmented Hierarchical Scale Feature Pyramid Network) that fuses deep and shallow features. This improves the feature representation capability of the model, allowing it to detect targets of different sizes efficiently. Finally, an optimized detail-enhanced convolution (DEConv) is introduced into the head network to form the LSCOD (Lightweight Shared Convolutional Object Detector Head) module, which enhances the generalization ability of the model by integrating prior information and adopting a shared-convolution strategy. This improves localization and classification performance without increasing the parameter count or computational cost, thus effectively improving the detection of small UAV targets. Experimental results show that, compared with the baseline model, YOLOv8-E improves mAP@0.5 (mean average precision at IoU = 0.5) by 6.3%, reaching 98.4%, while reducing the number of model parameters by more than 50%. Overall, YOLOv8-E significantly reduces the demand for computational resources while maintaining high-precision detection.
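
As a rough illustration of the edge-sensitive fusion idea described for C2f-ESCFFM, the minimal PyTorch sketch below pairs a fixed SobelConv branch (edge extraction) with a learnable convolution branch (spatial features) and fuses them. The module names, the depthwise Sobel implementation, and the 1x1 fusion layer here are illustrative assumptions for exposition, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SobelConv(nn.Module):
    """Depthwise convolution with fixed (non-trainable) Sobel kernels."""

    def __init__(self, channels: int):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t()
        # One Gx and one Gy kernel per input channel, stored as a buffer.
        kernel = torch.stack([gx, gy]).repeat(channels, 1, 1).unsqueeze(1)
        self.register_buffer("kernel", kernel)
        self.channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        edges = F.conv2d(x, self.kernel, padding=1, groups=self.channels)
        b, _, h, w = edges.shape
        # Approximate gradient magnitude: |Gx| + |Gy| per input channel.
        return edges.view(b, self.channels, 2, h, w).abs().sum(dim=2)


class EdgeSensitiveFusion(nn.Module):
    """Fuses the fixed Sobel edge branch with a learnable spatial branch.

    Illustrative sketch only; the paper's C2f-ESCFFM wraps this kind of
    fusion inside the C2f cross-stage structure.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.edge_branch = SobelConv(channels)
        self.spatial_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.SiLU(),
        )
        # 1x1 convolution to fuse the concatenated edge and spatial features.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(
            torch.cat([self.edge_branch(x), self.spatial_branch(x)], dim=1)
        )


if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)   # a typical YOLO-scale feature map
    out = EdgeSensitiveFusion(64)(x)
    print(out.shape)                 # torch.Size([1, 64, 80, 80])
```

Because the Sobel kernels are registered as a non-trainable buffer, the edge branch adds no learnable parameters, which is consistent with the abstract's emphasis on keeping the model lightweight.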
