Abstract
To address the low small-target detection accuracy, high missed-detection rate, and slow detection speed of existing UAV fire detection systems, an improved YOLOv5 UAV flame detection algorithm is proposed. First, anchor box clustering is optimized with the K-means++ algorithm to reduce the classification error rate. Second, the original backbone network is enhanced with the CBAM attention mechanism, which scans the global feature map and assigns higher weights to the target regions that need to be focused on. Third, the PANet in the neck is replaced with a BiFPN, and skip connections are introduced during feature fusion, which better preserves the semantic information of both high-level and low-level features. Finally, the α-IoU loss function is adopted; by modulating α, different levels of bounding-box regression accuracy can be achieved, improving detection accuracy on small datasets and robustness to noise. Experimental results on a randomly split dataset show that the modified YOLOv5 algorithm achieves a mAP of 80.2%, which is 6.7% higher than the original YOLOv5, while maintaining 64 FPS. The method improves the accuracy of UAV-based fire monitoring, outperforms existing flame detection algorithms, and meets the requirements of practical applications.
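To illustrate the first step, the sketch below clusters ground-truth box widths and heights with K-means++ initialization to obtain candidate anchors. The use of scikit-learn's KMeans, the anchor count, and the synthetic box sizes are illustrative assumptions rather than the paper's exact procedure.

# Minimal sketch: K-means++ clustering over ground-truth box sizes to derive anchors.
# The box data, anchor count, and scikit-learn usage are assumptions standing in for
# the paper's dataset-specific clustering step.
import numpy as np
from sklearn.cluster import KMeans

def cluster_anchors(wh, n_anchors=9, seed=0):
    """wh: (N, 2) array of ground-truth box widths and heights in pixels."""
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=seed)
    km.fit(wh)
    anchors = km.cluster_centers_
    # Sort by area so anchors can be assigned to detection heads from small to large.
    return anchors[np.argsort(anchors.prod(axis=1))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    boxes = rng.uniform(8, 256, size=(500, 2))  # hypothetical flame box sizes
    print(cluster_anchors(boxes).round(1))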
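For the second step, the following is a minimal PyTorch sketch of a standard CBAM block (channel attention followed by spatial attention). Where the block is inserted into the YOLOv5 backbone, the reduction ratio, and the kernel size are assumptions, not the paper's reported configuration.

# Minimal sketch of a CBAM block: channel attention re-weights feature channels,
# then spatial attention re-weights locations, so target regions receive higher weight.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        x = x * self.ca(x)    # channel re-weighting
        return x * self.sa(x)  # spatial re-weighting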
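Finally, as context for the loss modification, the basic form of the α-IoU loss raises the IoU term to a power α; the practical variants that add distance or aspect-ratio penalty terms follow the same power parameterization. A minimal sketch of the basic form, with the choice of α left as a tunable assumption:

L_{\alpha\text{-IoU}} = 1 - \mathrm{IoU}^{\alpha}, \qquad \alpha > 0

Setting α = 1 recovers the ordinary IoU loss, while larger values (e.g., α = 3) up-weight high-IoU boxes and sharpen bounding-box regression.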