Flame detection is of great significance in fire prevention systems. YOLOv4 has poor real-time performance on flame detection because of its complex structure and large number of parameters. To address this problem, a novel flame detection framework, YOLO for flame (YOLO-F), is proposed in this paper. The backbone of YOLOv4 is simplified from the original 53 convolutional layers to 34 by simplifying the structure of the CSPBlock, which reduces the number of parameters. Based on the FPN, an effective and lightweight feature pyramid architecture, namely FPNs-SE, is then proposed, and the neck of YOLOv4 is replaced by FPNs-SE to enhance feature extraction at different scales. In addition, the CIoU loss in YOLOv4 ignores the similarity of area between the predicted bounding box and the ground-truth bounding box; an effective loss named ACIoU is proposed in this paper to handle this issue and further improve detection accuracy. The proposed method is evaluated on the FLAME dataset and a web-crawled dataset. The mAP, recall, and precision of YOLO-F are on average 2.01%, 4.0%, and 2.0% higher than those of YOLOv4. With an input size of [Formula: see text] and on a single GTX 1660, our method reaches 24.53 fps, a 38.04% improvement over YOLOv4. The experimental results show that our method is more robust to small flames and flame-like objects and achieves the best balance between detection speed and accuracy. The code is made available at https://github.com/Windxy/YOLO-F .
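To illustrate the idea behind ACIoU, the sketch below implements the standard CIoU loss and then adds an area-similarity penalty on top of it. The abstract does not give the exact ACIoU formulation, so the specific penalty used here, `1 - min(area)/max(area)`, and the `area_weight` parameter are illustrative assumptions, not the paper's definition.

```python
import math

def ciou_loss(pred, gt):
    """Standard CIoU loss for axis-aligned boxes given as (x1, y1, x2, y2)."""
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt
    # Intersection and union areas.
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    area_p = (px2 - px1) * (py2 - py1)
    area_g = (gx2 - gx1) * (gy2 - gy1)
    iou = inter / (area_p + area_g - inter)
    # Squared center distance over squared enclosing-box diagonal.
    rho2 = ((px1 + px2 - gx1 - gx2) / 2) ** 2 + ((py1 + py2 - gy1 - gy2) / 2) ** 2
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw ** 2 + ch ** 2
    # Aspect-ratio consistency term.
    v = (4 / math.pi ** 2) * (math.atan((gx2 - gx1) / (gy2 - gy1))
                              - math.atan((px2 - px1) / (py2 - py1))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v

def aciou_loss(pred, gt, area_weight=1.0):
    """CIoU plus an area-similarity penalty (illustrative, not the
    paper's exact ACIoU): penalize boxes whose areas differ even when
    center distance and aspect ratio already match."""
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    area_penalty = 1 - min(area_p, area_g) / max(area_p, area_g)
    return ciou_loss(pred, gt) + area_weight * area_penalty
```

With this formulation, a predicted box that matches the ground truth's center and aspect ratio but not its area receives an extra penalty that plain CIoU does not apply.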