Fire detection is crucial for protecting human life and property. Traditional methods and deep learning techniques have both been widely applied in this area, yet they often fall short due to the considerable variability in the shape, size, and intensity of flames. Conventional approaches rely on predefined fire characteristics and therefore generalize poorly across diverse fire conditions, while deep learning models can struggle with rare or atypical flame appearances. To address these challenges, we introduce a novel framework that combines traditional methods with deep learning: the Fire Segmentation-Detection Framework (FSDF). FSDF strengthens flame feature detection by extracting color and texture information from images using the Hue-Saturation-Value (HSV) color space and the Completed Local Binary Pattern (CLBP). In addition, we integrate YOLOv8 for image segmentation and a Vector Quantized Variational Autoencoder (VQ-VAE) for unsupervised fire detection. To assess the accuracy and robustness of the proposed method, we conducted a comprehensive evaluation on a dataset constructed from real-world forest and urban fires. Experimental results show that our approach outperforms several baseline methods; compared with YOLOv8, for example, our framework improves precision, recall, and F-score by 19.5%, 1.2%, and 11.7%, respectively. Finally, we conducted extensive field tests by deploying a robot running the algorithm in a real fire scenario. These experiments demonstrate both the performance of the method and its potential for practical deployment.
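The hand-crafted front end named in the abstract (HSV color filtering followed by CLBP texture description) can be illustrated with a short sketch. The code below is a minimal approximation only: the HSV thresholds are illustrative guesses rather than the paper's values, and scikit-image's plain uniform LBP is used as a stand-in for the full CLBP descriptor, which additionally combines sign, magnitude, and center components.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern


def fire_color_mask(image_bgr, lower=(0, 120, 150), upper=(35, 255, 255)):
    """Keep pixels whose HSV values fall in a fire-like range.

    The (hue, saturation, value) bounds are illustrative guesses,
    not the thresholds used in the paper.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))


def texture_histogram(image_bgr, mask, points=8, radius=1):
    """Uniform-LBP histogram over the masked region, standing in for
    the CLBP descriptor named in the abstract."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    codes = lbp[mask > 0]
    if codes.size == 0:  # no fire-colored pixels found
        return np.zeros(points + 2)
    hist, _ = np.histogram(codes, bins=points + 2,
                           range=(0, points + 2), density=True)
    return hist


if __name__ == "__main__":
    frame = cv2.imread("frame.jpg")  # hypothetical input frame
    mask = fire_color_mask(frame)
    features = texture_histogram(frame, mask)
    print(mask.mean() / 255, features)  # fire-colored coverage + texture histogram
```

In the full framework these color and texture cues would be combined with YOLOv8's segmentation output and the VQ-VAE's unsupervised detection; those learned components are not sketched here.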