With continued societal development and rapid urbanization, the need for effective fire detection systems is growing. This study aims to improve the accuracy and reliability of fire detection in complex environments by refining the You Only Look Once version 5 (YOLOv5) algorithm and introducing an algorithm based on fire characteristics. First, the Convolutional Block Attention Module (CBAM) is introduced to direct the model toward salient features, improving detection precision. Second, a multi-scale feature fusion network built on the Adaptive Spatial Feature Fusion (ASFF) module is adopted to effectively combine feature information across scales, improving the model's understanding of image content and strengthening detection robustness. In addition, refining the loss function and adding a larger detection head further improve the model's ability to detect small targets. Experimental results show that the improved YOLOv5 algorithm achieves accuracy gains of 8% and 8.2% on standard and small-target datasets, respectively. To verify the practical applicability of the improved YOLOv5 algorithm, this study also introduces a temperature-based flame detection algorithm. With the two algorithms combined and deployed together, the final experiments show that the integrated approach not only improves accuracy but also runs at 57 frames per second, meeting the requirements for practical deployment.
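The abstract does not give implementation details for the CBAM block it adds to YOLOv5. Below is a minimal sketch of a standard CBAM module (sequential channel attention followed by spatial attention, as defined by Woo et al.) in PyTorch; the class names, the reduction ratio of 16, and the 7×7 spatial kernel are common defaults and assumptions here, not details taken from this paper.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel attention: shared MLP over global avg- and max-pooled features."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale


class SpatialAttention(nn.Module):
    """Spatial attention: conv over channel-wise avg and max maps."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class CBAM(nn.Module):
    """CBAM: channel attention, then spatial attention, applied in sequence."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.sa(self.ca(x))
```

In a YOLOv5-style integration, such a block would typically be inserted after a backbone or neck stage; since it preserves the input tensor shape, it can be dropped in without changing the surrounding layer dimensions.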