Abstract
Forest fires are devastating natural disasters. Existing fire detection models face limitations in dataset availability, multi-scale feature extraction, and locating obscured or small flames and smoke. To address these issues, we develop a dataset containing real and synthetic forest fire images captured from a UAV (Unmanned Aerial Vehicle) perspective. Additionally, we propose the Ghost Convolution Swin Transformer (GCST) module to extract multi-scale flame and smoke features from different receptive fields by integrating parallel Ghost convolution and Swin Transformer. Subsequently, we design a lightweight reparameterized rotation attention module, which captures interactions across channel and spatial dimensions to suppress background noise and focus on obscured flames and smoke. Finally, we introduce a loss function, called Efficient Auxiliary Geometric Intersection over Union (EAGIoU), which employs an auxiliary bounding box to accelerate the model's convergence while integrating the geometrical properties of the predicted and ground-truth bounding boxes to accurately locate small flames and smoke. Extensive experimental results demonstrate that our method achieves 75.2% mAP@0.5 and 42% mAP@0.5:0.95 at 239 frames per second, a significant improvement in accuracy and real-time performance over state-of-the-art techniques. The code and datasets are available at https://github.com/luckylil/forest-fire-detection.
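For readers unfamiliar with the metrics above, EAGIoU builds on the standard Intersection over Union between a predicted and a ground-truth bounding box. The sketch below shows only this plain IoU baseline in Python; the auxiliary-box and geometric terms that distinguish EAGIoU are defined in the paper and are not reproduced here. The box format `(x1, y1, x2, y2)` is an assumption for illustration.

```python
def iou(box_a, box_b):
    """Plain axis-aligned IoU of two (x1, y1, x2, y2) boxes.

    This is the baseline quantity that IoU-family losses such as
    EAGIoU extend; the EAGIoU-specific terms are not shown here.
    """
    # Intersection rectangle coordinates.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two unit-offset 2x2 boxes share a 1x1 intersection out of a union of 7.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # -> 0.142857...
```

mAP@0.5 counts a detection as correct when this IoU with the ground truth exceeds 0.5, while mAP@0.5:0.95 averages over IoU thresholds from 0.5 to 0.95.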