Abstract

With the rising frequency and severity of wildfires across the globe, researchers have been actively searching for a reliable solution for early-stage forest fire detection. In recent years, Convolutional Neural Networks (CNNs) have demonstrated outstanding performance in computer vision-based object detection tasks, including forest fire detection. Using CNNs to detect forest fires by segmenting both flame and smoke pixels can provide not only early and accurate detection but also additional information such as the size, spread, location, and movement of the fire. However, CNN-based segmentation networks are computationally demanding and can be difficult to deploy onboard lightweight mobile platforms, such as Uncrewed Aerial Vehicles (UAVs). To address this issue, this paper proposes a new efficient upsampling technique based on transposed convolution that makes segmentation CNNs lighter. The proposed technique, named Reversed Depthwise Separable Transposed Convolution (RDSTC), achieved F1-scores of 0.78 for smoke and 0.74 for flame, outperforming U-Net networks with bilinear upsampling, transposed convolution, and CARAFE upsampling. Additionally, a Multi-signature Fire Detection Network (MsFireD-Net) is proposed, with 93% fewer parameters and 94% fewer computations than the RDSTC U-Net. Despite being such a lightweight and efficient network, MsFireD-Net achieves strong results compared with the other U-Net-based networks.
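To give a sense of why a depthwise-separable factorization of transposed convolution reduces model size, the sketch below compares parameter counts for a standard transposed convolution and a depthwise-plus-pointwise factorization. The channel and kernel sizes are illustrative assumptions, not values from the paper, and the exact factorization order used by RDSTC is not specified in the abstract.

```python
# Parameter-count comparison: standard transposed convolution vs. a
# depthwise-separable factorization (the general idea behind techniques
# such as RDSTC). Channel sizes and kernel size are assumed for
# illustration only; biases are omitted for simplicity.

def transposed_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k transposed convolution."""
    return c_in * c_out * k * k

def separable_transposed_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights when the layer is factorized into a depthwise k x k
    transposed convolution plus a 1 x 1 pointwise convolution."""
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 convolution mixing channels
    return depthwise + pointwise

c_in, c_out, k = 64, 64, 3   # assumed layer sizes
standard = transposed_conv_params(c_in, c_out, k)      # 36864
separable = separable_transposed_conv_params(c_in, c_out, k)  # 4672
print(standard, separable, round(standard / separable, 1))
```

With these assumed sizes, the factorized layer uses roughly 7.9x fewer weights than the standard transposed convolution, which is the kind of saving that makes segmentation networks practical on UAV-class hardware.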
