Abstract

Video-based smoke detection plays an important role in the fire detection community. The task remains challenging, however, because smoke texture, shape, and color vary widely in real applications. To effectively exploit long-range motion context, we propose a novel video-based smoke detection method built on Recurrent Neural Networks (RNNs). More concretely, the proposed method first captures spatial and motion context information with deep convolutional motion-space networks. A temporal pooling layer and RNNs are then used to train the smoke model effectively. Finally, to promote further research and evaluation of video-based smoke models, we also construct a new large database of 3000 challenging smoke video clips that cover large variations in illuminance and weather conditions. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on all public benchmarks.
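The pipeline described above (per-frame convolutional features, a temporal pooling layer, then an RNN over the pooled sequence) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `SmokeRNN` class, all layer sizes, the choice of GRU, and the average-pooling window are assumptions introduced here for clarity.

```python
import torch
import torch.nn as nn

class SmokeRNN(nn.Module):
    """Hypothetical sketch of the described pipeline: per-frame CNN
    features -> temporal pooling -> RNN -> smoke / non-smoke logits.
    All layer sizes are illustrative, not taken from the paper."""

    def __init__(self, feat_dim=64, hidden_dim=32, pool_window=2):
        super().__init__()
        # Small conv backbone standing in for the deep convolutional
        # motion-space networks (which would also ingest motion input).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Temporal pooling over windows of consecutive frame features.
        self.pool = nn.AvgPool1d(pool_window)
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)  # smoke vs. non-smoke

    def forward(self, clip):  # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).view(b, t, -1)        # (B, T, F)
        pooled = self.pool(feats.transpose(1, 2)).transpose(1, 2)  # (B, T', F)
        _, h = self.rnn(pooled)   # final hidden state summarises the motion
        return self.head(h[-1])   # (B, 2) classification logits

model = SmokeRNN()
logits = model(torch.randn(2, 8, 3, 64, 64))  # 2 clips of 8 RGB frames
print(logits.shape)  # torch.Size([2, 2])
```

In a two-stream "motion-space" setting, a second backbone over optical-flow frames would typically be fused with the RGB features before the RNN; the single backbone here keeps the sketch compact.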
