Abstract

Indoor fires can cause severe property damage and, more importantly, serious casualties. Early and timely fire detection helps firefighters make informed judgments about the cause of a fire and thus control fire accidents effectively. However, most existing computer-vision-based fire detection methods can detect only a single type of target, either flame or smoke. In this paper, a tailored deep-learning-based scheme is designed to detect flame and smoke objects simultaneously in indoor scenes. We adopt the semantic segmentation network DeepLabv3+, an encoder-decoder architecture, as the main model for both the detection and segmentation of fire objects. Within it, the key module, atrous convolution, is integrated into the architecture to improve the resolution of the output feature maps and locate targets accurately. In addition, to address the lack of indoor fire data, we construct a new annotated dataset, the ‘Flame and Smoke Semantic Dataset (FSSD)’, which contains rich semantic annotations of fire objects collected from real indoor scenes and other fire sources. Experiments conducted on our FSSD database and comparisons with state-of-the-art methods (FCN, PSPNet, and DeepLabv3) confirm the high performance of the proposed scheme, which achieves 91.53% aAcc, 89.67% mAcc, and 0.8018 mIoU.
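
The following minimal PyTorch sketch is not the authors' implementation; it only illustrates the property of atrous (dilated) convolution that the abstract relies on, namely enlarging the receptive field while keeping the spatial resolution of the feature map unchanged. The channel sizes, dilation rate, and input shape are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AtrousBlock(nn.Module):
    """3x3 convolution with a configurable dilation rate.
    Setting padding = dilation keeps the output height and width
    identical to the input, so no spatial detail is lost."""
    def __init__(self, in_ch, out_ch, dilation):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                              padding=dilation, dilation=dilation, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

# A dilated 3x3 kernel with rate 6 covers a 13x13 window of the input,
# yet the output feature map keeps the same 64x64 resolution.
x = torch.randn(1, 256, 64, 64)           # dummy encoder feature map (assumed shape)
block = AtrousBlock(256, 256, dilation=6)
print(block(x).shape)                     # torch.Size([1, 256, 64, 64])
```

In DeepLabv3+-style encoders, several such blocks with different dilation rates are combined (atrous spatial pyramid pooling) to capture both small flame regions and large diffuse smoke plumes without downsampling the features.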
