Abstract
Fires pose an enormous risk to human life and property. In the domain of fire warning, earlier approaches leveraging computer vision have achieved significant progress. However, these methods ignore the local and global motion characteristics of flames. To address this issue, a Local and Global Adaptive Enhancement (LGAE) network is proposed, which mainly consists of a backbone block, a Local Adaptive Motion Enhancement (LAME) block, and a Global Adaptive Motion Enhancement (GAME) block. Specifically, the LAME block is designed to capture local motion information, and the GAME block is devised to enhance global motion information. Together, these two blocks improve the fire recognition ability of LGAE. To facilitate research and development in fire recognition, we constructed a Large-scale Fire Video Recognition (LFVR) dataset comprising 11,560 video clips. Extensive experiments were carried out on the LFVR and FireNet datasets. LGAE achieved F1 scores of 88.93% on LFVR and 93.18% on FireNet. The experimental results indicate that LGAE outperforms other methods on both datasets.
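The abstract describes LGAE as a backbone followed by two motion-enhancement blocks. The sketch below is purely illustrative of that data flow, not the authors' implementation: the block internals (frame differencing for local motion, clip-level gating for global motion) and all shapes are assumptions standing in for the unspecified architecture.

```python
import numpy as np

def backbone(clip):
    # Placeholder feature extractor: per-frame spatial pooling.
    # clip: (frames, height, width, channels) -> (frames, channels)
    return clip.mean(axis=(1, 2))

def lame_block(feats):
    # Assumed form of Local Adaptive Motion Enhancement: emphasize
    # frame-to-frame differences, a common proxy for local motion.
    diffs = np.diff(feats, axis=0, prepend=feats[:1])
    return feats + diffs

def game_block(feats):
    # Assumed form of Global Adaptive Motion Enhancement: reweight
    # features by clip-level statistics as a global-motion context gate.
    gate = 1.0 / (1.0 + np.exp(-feats.mean(axis=0)))  # sigmoid gate
    return feats * gate

def lgae_forward(clip, num_classes=2):
    # Backbone -> LAME -> GAME -> toy classification head.
    feats = game_block(lame_block(backbone(clip)))
    return feats.mean(axis=0)[:num_classes]

clip = np.random.rand(16, 112, 112, 3)  # a 16-frame RGB clip
logits = lgae_forward(clip)
print(logits.shape)  # (2,) — fire vs. no-fire scores in this sketch
```

The point of the sketch is only the composition order stated in the abstract: local enhancement is applied to backbone features before global enhancement, and the classifier consumes the enhanced features.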