Abstract
A forest fire is a natural disaster characterized by rapid spread, difficulty of extinguishing, and widespread destruction, and therefore demands an efficient response. Existing detection methods fail to balance global and local fire features, leading to false detections of small or hidden fires. In this paper, we propose a novel detection technique based on an improved YOLO v5 model to enhance the visual representation of forest fires and retain more information about global interactions. We add a plug-and-play global attention mechanism to improve the efficiency of feature extraction in the neck and backbone of the YOLO v5 model. Then, a re-parameterized convolutional module is designed, and a decoupled detection head is used to accelerate convergence. Finally, a weighted bi-directional feature pyramid network (BiFPN) is introduced to fuse feature information for local information processing. In the evaluation, we use the complete intersection over union (CIoU) loss function to optimize the multi-task loss for different kinds of forest fires. Experiments show that precision, recall, and mean average precision (mAP) increase by 4.2%, 3.8%, and 4.6%, respectively, compared with the classic YOLO v5 model. In particular, mAP@0.5:0.95 is 2.2% higher than that of the other detection methods, while meeting the requirements of real-time detection.
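The CIoU loss mentioned above is a standard bounding-box regression loss that augments IoU with a center-distance penalty and an aspect-ratio consistency term. As a minimal sketch (not the paper's implementation), assuming axis-aligned boxes in `(x1, y1, x2, y2)` format:

```python
import math

def ciou_loss(box1, box2, eps=1e-7):
    """Complete IoU (CIoU) loss between two boxes given as (x1, y1, x2, y2).

    CIoU = IoU - rho^2/c^2 - alpha*v, and the loss is 1 - CIoU, where
    rho is the distance between box centers, c is the diagonal of the
    smallest enclosing box, and v measures aspect-ratio mismatch.
    """
    x1a, y1a, x2a, y2a = box1
    x1b, y1b, x2b, y2b = box2
    w1, h1 = x2a - x1a, y2a - y1a
    w2, h2 = x2b - x1b, y2b - y1b

    # Intersection area and IoU
    iw = max(0.0, min(x2a, x2b) - max(x1a, x1b))
    ih = max(0.0, min(y2a, y2b) - max(y1a, y1b))
    inter = iw * ih
    union = w1 * h1 + w2 * h2 - inter
    iou = inter / (union + eps)

    # Squared distance between box centers
    rho2 = ((x1a + x2a) / 2 - (x1b + x2b) / 2) ** 2 + \
           ((y1a + y2a) / 2 - (y1b + y2b) / 2) ** 2

    # Squared diagonal of the smallest enclosing box
    cw = max(x2a, x2b) - min(x1a, x1b)
    ch = max(y2a, y2b) - min(y1a, y1b)
    c2 = cw ** 2 + ch ** 2

    # Aspect-ratio consistency term and its trade-off weight
    v = (4 / math.pi ** 2) * (math.atan(w2 / (h2 + eps)) -
                              math.atan(w1 / (h1 + eps))) ** 2
    alpha = v / ((1 - iou) + v + eps)

    return 1 - (iou - rho2 / (c2 + eps) - alpha * v)
```

For identical boxes the loss approaches 0; for disjoint boxes the center-distance term keeps producing a gradient even though plain IoU is 0, which is the property that makes CIoU useful for small, scattered fire targets.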