Abstract

The existing YOLOv5-based framework has achieved great success in the field of object detection. However, in forest fire detection tasks, few high-quality forest fire images are available, and the performance of the YOLO model degrades severely when detecting small-scale forest fires. Making full use of context information can effectively improve small-object detection. To this end, this paper proposes a new graph-embedded YOLOv5 forest fire detection framework that improves small-scale forest fire detection by exploiting context information at different scales. To mine local context information, we design a spatial graph convolution operation based on the message passing neural network (MPNN) mechanism. To exploit global context information, we introduce a multi-head self-attention (MSA) module before each YOLO head. Experimental results on FLAME and our self-built fire dataset show that the proposed model improves the accuracy of small-scale forest fire detection while retaining real-time performance by fully leveraging the advantages of the YOLOv5 framework.
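The abstract names two context-aggregation mechanisms: an MPNN-style spatial graph convolution for local context and multi-head self-attention for global context. The paper's exact layer definitions are not given here, so the following is only a minimal NumPy sketch of the two generic operations, treating feature-map cells as graph nodes; all function names, weight shapes, and the mean-aggregation/residual choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mpnn_step(node_feats, adj, w_msg, w_upd):
    """One MPNN-style message-passing step (assumed form).

    node_feats: (N, d) features, one row per spatial cell (graph node)
    adj:        (N, N) binary adjacency over neighboring cells
    """
    msgs = adj @ (node_feats @ w_msg)                 # sum messages from neighbors
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    msgs = msgs / deg                                 # mean aggregation (assumption)
    return np.maximum(0.0, node_feats + msgs @ w_upd) # residual update + ReLU

def multi_head_self_attention(x, wq, wk, wv, wo, heads):
    """Standard multi-head self-attention over N tokens of width d."""
    n, d = x.shape
    dh = d // heads
    # Project and split into heads: (heads, N, dh)
    q = (x @ wq).reshape(n, heads, dh).transpose(1, 0, 2)
    k = (x @ wk).reshape(n, heads, dh).transpose(1, 0, 2)
    v = (x @ wv).reshape(n, heads, dh).transpose(1, 0, 2)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(dh)   # scaled dot-product
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)          # softmax over keys
    out = (attn @ v).transpose(1, 0, 2).reshape(n, d) # merge heads
    return out @ wo                                   # output projection
```

In the framework described above, an operation like `mpnn_step` would act on a local neighborhood graph inside the backbone, while `multi_head_self_attention` would be applied to the flattened feature map feeding each YOLO detection head.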
