Abstract

Forest fires are highly unpredictable and extremely destructive. Traditional approaches such as manual inspection, sensor-based detection, satellite remote sensing, and computer-vision detection all have obvious limitations. Deep learning techniques can learn and adaptively extract features of forest fires. However, because forest fire targets appear very small in images captured from long range, models often fail to learn effective information from them. To address this problem, we propose an improved small-target forest fire detection model based on YOLOv5; in practical applications, the model uses cameras as its sensors for detecting forest fires. First, we improved the Backbone of YOLOv5 by replacing the original Spatial Pyramid Pooling-Fast (SPPF) module with a Spatial Pyramid Pooling-Fast-Plus (SPPFP) module to better capture the global information of small forest fire targets, and we added the Convolutional Block Attention Module (CBAM) to improve the identifiability of small forest fire targets. Second, we improved the Neck of YOLOv5 by adding a very-small-target detection layer and replacing the Path Aggregation Network (PANet) with a Bi-directional Feature Pyramid Network (BiFPN). Finally, since our initial small-target forest fire dataset is a small-sample dataset, we used a transfer learning strategy for training. Experimental results on this dataset show that the proposed improvements raise mAP@0.5 by 10.1%, demonstrating that the performance of the model is effectively improved and that it holds promise for practical application.
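For reference, the sketch below shows a standard CBAM block of the kind the abstract refers to, written in PyTorch. The class names, reduction ratio, and kernel size are illustrative assumptions, not the exact configuration used in the paper's YOLOv5 variant.

```python
# Minimal CBAM sketch (channel attention followed by spatial attention),
# assuming a generic PyTorch backbone feature map. Hyperparameters are
# illustrative, not the paper's exact settings.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
        )

    def forward(self, x):
        # Aggregate spatial information with average and max pooling,
        # pass both through a shared MLP, and combine into channel weights.
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Build a 2-channel map from channel-wise average and max,
        # then learn a spatial weighting with a single convolution.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class CBAM(nn.Module):
    """Channel attention followed by spatial attention (Woo et al., 2018)."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.channel_att = ChannelAttention(channels, reduction)
        self.spatial_att = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.spatial_att(self.channel_att(x))


if __name__ == "__main__":
    # Example: apply CBAM to a 256-channel backbone feature map.
    feats = torch.randn(1, 256, 40, 40)
    print(CBAM(256)(feats).shape)  # torch.Size([1, 256, 40, 40])
```

In a YOLOv5-style network, such a block would typically be placed after selected Backbone stages so the refined feature maps feed the Neck; where exactly the authors insert it is described in the full text, not in this abstract.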
