Global climate change has triggered more frequent extreme weather events, leading to a marked rise in the frequency and intensity of forest fires. Traditional fire monitoring methods, such as manual inspection, ground sensor networks, and remote sensing satellites, all have limitations. With advances in drone technology and deep learning, combining drones with artificial intelligence for fire monitoring has become a mainstream approach. This paper proposes an improved YOLOv8-based model that replaces full convolution with local convolution in the C2f module and integrates the EMA module to strengthen feature-channel interaction modeling and the use of contextual information, thereby reducing model complexity and increasing efficiency. To address the false positives and missed detections caused by variations in vegetation, terrain, and lighting in forests, we introduce the AgentAttention module into the Backbone; it combines Softmax and linear attention to optimize feature extraction, improving the model's accuracy and robustness. To tackle the challenge of detecting flames and smoke at different scales and viewing angles, we design the BiFormer module, which adaptively fuses global and local features and significantly enhances the model's multi-scale, multi-angle detection capability. Experimental results show that the improved model achieves a Precision of 93.57% and a Recall of 88.51%, improvements of 5.05% and 2.72% over the original model, while improving FPS by 14.3% and reducing GFLOPs and Params by 25% and 19.7%, respectively. This research has significant application prospects in forest fire early warning, emergency response, and loss reduction, and provides strong technical support for forest resource protection and public safety.
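As a minimal sketch of the efficiency idea behind replacing full convolution in the C2f module, the snippet below implements a partial (local) convolution in the style of FasterNet's PConv: only a fraction of the input channels are convolved while the rest pass through unchanged, which cuts parameters and FLOPs roughly in proportion to the untouched channels. This is an illustrative assumption about what "local convolution" denotes here; the class name `PartialConv` and the split ratio are hypothetical, and the paper's exact C2f integration may differ.

```python
import torch
import torch.nn as nn


class PartialConv(nn.Module):
    """Illustrative partial convolution: convolve only the first
    1/div of the channels and concatenate the remainder untouched.
    (Hypothetical sketch, not the paper's exact module.)"""

    def __init__(self, channels: int, div: int = 4):
        super().__init__()
        self.conv_ch = channels // div  # channels that get convolved
        # 3x3 conv over the selected channel slice; padding keeps H and W
        self.conv = nn.Conv2d(self.conv_ch, self.conv_ch, 3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split along the channel dimension, convolve one part, rejoin
        x1, x2 = torch.split(x, [self.conv_ch, x.size(1) - self.conv_ch], dim=1)
        return torch.cat([self.conv(x1), x2], dim=1)
```

With `div=4`, the layer carries roughly a quarter of the parameters of a full 3x3 convolution over the same channel count, which is consistent with the reductions in GFLOPs and Params reported above.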