Abstract

Fire is one of the most dangerous disasters threatening human life and property worldwide. To reduce fire losses, research on video analysis for early smoke detection has become particularly significant. However, extracting stable features for smoke recognition remains a challenging task, largely because of smoke's variations in color, shape, and texture. Classical convolutional neural networks can automatically learn appearance features from a single frame but fail to capture motion information between frames. To address this issue, we propose a spatial-temporal convolutional neural network for video smoke detection and, for real-time detection, an enhanced architecture that uses a multitask learning strategy to jointly recognize smoke and estimate optical flow, capturing intra-frame appearance features and inter-frame motion features simultaneously. The effectiveness and efficiency of the proposed method are validated by experiments on our self-built dataset, on which it achieves a 97.0% detection rate and a 3.5% false alarm rate with a processing time of 5 ms per frame, clearly outperforming existing methods.
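To illustrate the multitask idea described above, the following is a minimal sketch of a network with a shared convolutional backbone feeding two heads, one for smoke classification and one for optical-flow estimation, trained with a joint loss. All layer sizes, names, and the loss weighting are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch: shared backbone + (a) smoke/no-smoke classification head
# and (b) optical-flow regression head. Sizes are illustrative only.
import torch
import torch.nn as nn

class MultiTaskSmokeNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared backbone over a pair of stacked RGB frames (6 input channels).
        self.backbone = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Classification head: smoke vs. non-smoke logits.
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 2),
        )
        # Flow head: 2-channel optical-flow field at 1/4 resolution.
        self.flow_head = nn.Conv2d(64, 2, kernel_size=3, padding=1)

    def forward(self, frame_pair):
        feats = self.backbone(frame_pair)
        return self.cls_head(feats), self.flow_head(feats)

# Joint training: classification loss + weighted flow-regression loss.
model = MultiTaskSmokeNet()
frames = torch.randn(4, 6, 224, 224)     # batch of stacked frame pairs
labels = torch.randint(0, 2, (4,))       # smoke / non-smoke labels
flow_gt = torch.randn(4, 2, 56, 56)      # downsampled flow targets
logits, flow = model(frames)
loss = nn.CrossEntropyLoss()(logits, labels) + 0.1 * nn.MSELoss()(flow, flow_gt)
loss.backward()
```

Because the two heads share one backbone, a single forward pass yields both the smoke decision and the motion estimate, which is what makes the joint appearance-and-motion model fast enough for real-time use.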
