Abstract

Vision sensor-based fire detection is an active and useful research domain that has received significant attention from computer vision experts. Baseline research relied on low-level color features, which have lately been replaced by the effective representations of deep models, achieving better accuracy; however, high false alarm rates persist, along with expensive computations. Furthermore, current feed-forward neural networks initialize and allocate weights according to the input shape, which poses a vanishing gradient problem and slows convergence. The main challenges in fire detection are thus the limited accuracy of existing models, high false alarm rates, high computational complexity, and vanishing gradients in very deep network architectures. To tackle these issues, we introduce Stacked Encoded-EfficientNet (SE-EFFNet), a cost-aware deep model with reduced false alarm rates and better fire recognition ability. SE-EFFNet uses a lightweight EfficientNet as a backbone to extract useful features, which are further refined by stacked autoencoders before the final classification decision. The stacked autoencoder in SE-EFFNet is not linearly connected; instead, we use dense connections to ensure effective fire scene recognition, with randomly initialized weights that mitigate the vanishing gradient problem and provide fast convergence. Experimental evaluation on benchmark datasets against recent state-of-the-art methods demonstrates the better recognition ability of SE-EFFNet and its flexible inference potential on edge devices. We also evaluate several favorable lightweight models before selecting the optimal SE-EFFNet.
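The pipeline described above (EfficientNet backbone, densely connected stacked encoders, then a classifier head) can be illustrated with a minimal PyTorch sketch. This is an illustrative reading of the abstract, not the authors' implementation: the choice of EfficientNet-B0, the three encoder blocks of width 256, and the binary fire/non-fire head are all assumptions.

```python
# Minimal sketch of the SE-EFFNet idea as described in the abstract:
# an EfficientNet backbone whose pooled features are refined by a stack
# of encoder blocks joined by dense (concatenative) connections before a
# fire/non-fire classifier. All layer sizes and block counts below are
# illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0


class DenselyStackedEncoder(nn.Module):
    """Encoder stack where each block receives the concatenation of the
    backbone features and all previous block outputs (dense connections,
    rather than a purely linear chain)."""

    def __init__(self, in_dim: int, hidden_dim: int = 256, num_blocks: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList()
        dim = in_dim
        for _ in range(num_blocks):
            self.blocks.append(
                nn.Sequential(nn.Linear(dim, hidden_dim), nn.ReLU(inplace=True))
            )
            dim += hidden_dim  # each dense connection widens the next input
        self.out_dim = dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for block in self.blocks:
            feats.append(block(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


class SEEffNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Pretrained ImageNet weights could be loaded here instead of None.
        backbone = efficientnet_b0(weights=None)
        self.features = backbone.features        # convolutional feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.encoder = DenselyStackedEncoder(in_dim=1280)  # B0 feature width
        self.classifier = nn.Linear(self.encoder.out_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.pool(self.features(x)).flatten(1)
        return self.classifier(self.encoder(h))


model = SEEffNet()
logits = model(torch.randn(1, 3, 224, 224))  # -> shape (1, 2)
```

Under this reading, the dense connections give each encoder block a direct path to the backbone features, which is one common way to ease gradient flow in deeper stacks.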
