Abstract

Video-based smoke detection plays an important role in the fire detection community. The task remains challenging, however, because smoke texture, shape, and color vary widely in real-world applications. To effectively exploit long-range motion context, we propose a novel video-based smoke detection method built on Recurrent Neural Networks (RNNs). More concretely, the proposed method first captures spatial and motion context information using deep convolutional motion-space networks. A temporal pooling layer and RNNs are then used to train the smoke model effectively. Finally, to promote further research on and evaluation of video-based smoke models, we also construct a new large database of 3,000 challenging smoke video clips that cover large variations in illumination and weather conditions. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on all public benchmarks.

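Since the abstract only outlines the pipeline, the following is a minimal sketch of how such a model could be assembled: per-frame spatial and motion streams, a temporal pooling layer over the frame features, and an RNN over the pooled sequence. It is written in PyTorch; the class name `TwoStreamSmokeRNN`, all layer sizes, the optical-flow input, and the GRU choice are illustrative assumptions, not the authors' actual motion-space networks.

```python
# Hedged sketch of the pipeline described in the abstract: two-stream CNN
# (spatial + motion) -> temporal pooling -> RNN -> smoke classifier.
# All architectural details below are assumptions for illustration only.
import torch
import torch.nn as nn

class TwoStreamSmokeRNN(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Spatial stream: per-frame appearance features from RGB frames.
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Motion stream: per-frame motion features, assumed here to come
        # from 2-channel optical flow (an assumption, not stated in the abstract).
        self.motion = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Temporal pooling over short windows, then an RNN over the pooled sequence.
        self.pool = nn.AvgPool1d(kernel_size=2, stride=2)  # halves sequence length
        self.rnn = nn.GRU(2 * feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)  # smoke / no-smoke logits

    def forward(self, rgb, flow):
        # rgb: (B, T, 3, H, W); flow: (B, T, 2, H, W)
        B, T = rgb.shape[:2]
        fs = self.spatial(rgb.flatten(0, 1)).view(B, T, -1)
        fm = self.motion(flow.flatten(0, 1)).view(B, T, -1)
        feats = torch.cat([fs, fm], dim=-1)                 # (B, T, 2*feat_dim)
        pooled = self.pool(feats.transpose(1, 2)).transpose(1, 2)
        _, h = self.rnn(pooled)                             # final hidden state
        return self.head(h[-1])                             # (B, 2)

if __name__ == "__main__":
    model = TwoStreamSmokeRNN()
    rgb = torch.randn(2, 16, 3, 64, 64)    # 2 clips, 16 frames each
    flow = torch.randn(2, 16, 2, 64, 64)
    print(model(rgb, flow).shape)           # torch.Size([2, 2])
```

The concatenation of the two streams before pooling is one of several plausible fusion points; late fusion of per-stream RNN outputs would be an equally reasonable reading of the abstract.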