Abstract

With the advent of high-resolution surveillance equipment, fire detection methods that extract more detailed information have drawn considerable attention. Most such methods are based on convolutional neural networks (CNNs), which have been widely applied to automatically extract image features for fire detection. However, existing CNN-based fire detection methods are designed only for fixed-scale images; they therefore struggle with the scale variation of fire objects and cannot satisfy the requirements of hardware that supplies images at different scales. In this paper, a fire disaster detection method that can handle varied-scale images is proposed. First, dense connections are used to enhance the information flow between layers. Then, grouped channel attention is applied to recalibrate the features. Finally, multiscale spatial feature pooling is employed to fuse features at different scales; this module allows the network to make predictions on images of different scales. Experimental results demonstrate that the proposed method achieves 91.4% accuracy with fixed-scale training and 92.4% accuracy with multiscale training.
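The reason multiscale spatial feature pooling permits varied-scale inputs can be sketched as follows. This is a generic spatial-pyramid-pooling illustration, not code from the paper: the function name, the grid levels (1, 2, 4), and the assumption that the feature map is at least 4×4 are all ours. Pooling over a fixed set of grids produces a fixed-length vector regardless of the input size, so the layers after it never see a scale-dependent shape.

```python
def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a 2-D feature map over each level's n x n grid and
    concatenate the results into one fixed-length vector.

    For levels (1, 2, 4) the output always has 1 + 4 + 16 = 21 values,
    whatever the input height and width (assumed >= max(levels)).
    """
    h = len(feature_map)
    w = len(feature_map[0])
    pooled = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                # Bin boundaries split the map near-evenly (ceil-style
                # upper edge, so adjacent bins may overlap by one row/col).
                r0, r1 = (i * h) // n, ((i + 1) * h + n - 1) // n
                c0, c1 = (j * w) // n, ((j + 1) * w + n - 1) // n
                pooled.append(max(feature_map[r][c]
                                  for r in range(r0, r1)
                                  for c in range(c0, c1)))
    return pooled
```

Feeding this function feature maps of different spatial sizes yields vectors of identical length, which is exactly the property the abstract relies on for predicting on different-scale images.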
