Smoke is visually elusive, especially in low-light conditions, which makes fast and accurate smoke detection from images difficult. To address these challenges, we propose an effective Multi-scale Interactive Fusion Network (MIFNet) for smoke image segmentation, built on a dual-encoder structure that combines a Transformer and a Convolutional Neural Network (CNN). To enrich feature representation, we propose a Local Feature Enhancement Propagation (LFEP) module that strengthens spatial details. To fuse global and local features efficiently, we integrate LFEP into the original Transformer in place of the traditional multi-head self-attention mechanism. We then propose a Multi-level Attention Coupled Module (MACM) to fuse the Transformer and CNN features of the dual-encoder; MACM can flexibly focus on information interaction between different levels of the two encoding paths. Finally, we design a Prior-guided Multi-scale Fusion Decoder (PMFD), which combines prior knowledge with a multi-scale feature fusion strategy to improve segmentation performance. Experimental results demonstrate that MIFNet substantially outperforms state-of-the-art methods, achieving a mean Intersection over Union (mIoU) of 81.6% on the synthetic smoke (SYN70K) dataset and an accuracy of 98.3% on the forest smoke dataset.
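The abstract does not specify the internals of MACM, but the general idea of coupling two encoder paths through attention can be illustrated with a minimal, hypothetical sketch: two same-scale feature maps (one from a CNN path, one from a Transformer path) are re-weighted channel-wise via softmax attention and summed. All names and the specific weighting scheme below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def coupled_fusion(cnn_feat: np.ndarray, trans_feat: np.ndarray) -> np.ndarray:
    """Toy attention-coupled fusion of two (C, H, W) feature maps.

    Hypothetical sketch only: the paper's MACM is not specified in the
    abstract; this shows the generic pattern of channel-wise attention
    weighting across two encoder paths.
    """
    # Global average pooling over spatial dims -> per-channel descriptors.
    desc = np.concatenate([cnn_feat.mean(axis=(1, 2)),
                           trans_feat.mean(axis=(1, 2))])
    # Softmax over concatenated descriptors yields mixing weights.
    w = np.exp(desc - desc.max())
    w /= w.sum()
    c = cnn_feat.shape[0]
    w_cnn, w_trans = w[:c], w[c:]
    # Re-weight each path channel-wise, then sum the two paths.
    return (cnn_feat * w_cnn[:, None, None]
            + trans_feat * w_trans[:, None, None])

cnn_feat = np.random.rand(8, 16, 16)
trans_feat = np.random.rand(8, 16, 16)
fused = coupled_fusion(cnn_feat, trans_feat)
print(fused.shape)  # (8, 16, 16)
```

In a real dual-encoder network this weighting would be learned rather than computed directly from pooled activations, and fusion would occur at multiple scales of the two encoding paths.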