Abstract

Convolutional Neural Network (CNN)-based approaches are popular for various image and video tasks due to their state-of-the-art performance. However, for problems like object detection and segmentation, CNNs still struggle with objects of arbitrary shape and size, occlusions, and varying viewpoints. This limitation makes them largely unsuitable for fire detection and segmentation, since flames can have an unpredictable scale and shape. In this paper, we propose a method that detects and segments fire regions with special consideration of their arbitrary sizes and shapes. Specifically, our approach uses a self-attention mechanism to augment spatial characteristics with temporal features, allowing the network to reduce its reliance on spatial factors like shape or size and to exploit robust spatial-temporal dependencies. Our pipeline has two stages: in the first stage, we extract region proposals using spatial-temporal features, and in the second stage, we classify whether each region proposal contains flame. Due to the scarcity of large fire datasets, we adopt a transfer learning strategy and pre-train our classifier on the ImageNet dataset. Additionally, our spatial-temporal network requires only semi-supervision: it needs just one ground-truth segmentation mask per frame-sequence input. Experimentally, our proposed method significantly outperforms state-of-the-art fire detection, with a 2 to 4% relative improvement in F1-score for large-scale fires and a nearly 60% relative improvement for small fires at a very early stage.
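The abstract does not detail the attention mechanism itself, so the following is only a minimal sketch of the general idea it invokes: plain scaled dot-product self-attention applied across per-frame feature vectors, so that each frame's representation becomes a similarity-weighted mix of all frames in the sequence and thereby carries temporal context. The function names, the toy feature format, and the omission of learned query/key/value projections (which a real network such as the one described would include) are all simplifying assumptions for illustration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def temporal_self_attention(frames):
    """Scaled dot-product self-attention over per-frame feature vectors.

    frames: list of T feature vectors (lists of floats), one per video frame.
    Returns T output vectors; each output is a convex combination of all
    frame features, weighted by dot-product similarity to the query frame.
    Illustrative only: no learned Q/K/V projections or multiple heads.
    """
    d = len(frames[0])
    outputs = []
    for q in frames:
        # Similarity of this frame to every frame, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in frames]
        weights = softmax(scores)
        # Weighted mix of all frames -> temporally augmented feature.
        out = [sum(w * v[j] for w, v in zip(weights, frames))
               for j in range(d)]
        outputs.append(out)
    return outputs
```

For example, with three frames where the first and third have identical features, the attention outputs for those two frames are identical as well, since each frame is re-expressed purely in terms of its similarity to the others.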

Highlights

  • According to a National Fire Protection Association report [1], in 2018 approximately 1,318,500 fire disasters occurred in the United States, causing 3,655 deaths, 15,200 injuries, and damages worth $25.6 billion

  • DenseFire achieves 96.9% accuracy on the NTUST dataset and 80.3% on the small-sized fire dataset, with a 50% lower floating-point operation (FLOPs) count than CNNFire

  • The accuracy of DenseFire is higher by 8.1% on the NTUST dataset and 15.2% on the small-sized fire dataset compared to ShuffleNet

Introduction

According to a National Fire Protection Association report [1], in 2018 approximately 1,318,500 fire disasters occurred in the United States, causing 3,655 deaths, 15,200 injuries, and damages worth $25.6 billion. This problem has motivated several works on fire detection systems, which fall into two classes: sensor-based technologies and image-based approaches. Popular sensor-based fire detection technologies include smoke detectors, thermometers, and ultraviolet light sensors. While these technologies are cheap and widely available, they rely on particle sampling, which makes their performance highly sensitive to the sensor's location and proximity to the fire. Image-based methods offer more information than sensor-based technologies: they can localize the fire, measure its intensity, and track its growth.
