ABSTRACT Early detection of a fire during its development stage is highly desirable to minimize human and material losses. We consider improving the efficiency of fire monitoring systems through the analysis of fire video data captured by unmanned aerial vehicles. Video images recorded while monitoring an area for fire events are classified by partitioning them into identical rectangular segments of a given size and assigning each segment to one of three classes: Smoke, Flame, or Indifferent. The Walsh-Hadamard transform was used to form descriptors for three weak classifiers: for the first, the transform was computed over a window covering the entire segment and its spectral coefficients served as the descriptor; for the second and third, descriptors were computed in windows of half and a quarter of the original window size, respectively. The classifier consisted of three independently trained neural networks, whose outputs were combined by a simple ensemble-averaging block. Special-purpose software was developed to classify the video images; it allows us to create a database of segments of the Smoke and Flame classes, compute the two-dimensional Walsh-Hadamard spectrum of video image segments, train fully connected neural networks, conduct exploratory analysis, and evaluate the relevance of the calculated two-dimensional spectral coefficients. For validation, we used real data from closed-circuit television cameras available in open databases. Experiments on the classification of video data containing flames and smoke showed that the proposed method achieves an average smoke detection accuracy of 86% and an average flame detection accuracy of 89.5%. Type II errors (missed detections) in smoke and flame detection averaged 13% and 4.5%, respectively.
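The following is a minimal sketch (not the authors' code) of the pipeline the abstract describes: the 2-D Walsh-Hadamard spectrum of an image segment is computed at the full, half, and quarter window sizes, each spectrum feeds one weak fully connected classifier, and the three outputs are combined by simple averaging. The segment size, network shapes, scaling convention, and class order are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import hadamard

def wht2d(patch):
    """Two-dimensional Walsh-Hadamard transform of a square patch
    whose side is a power of two (scaling is an assumption)."""
    n = patch.shape[0]
    H = hadamard(n)
    return (H @ patch @ H) / n

def descriptor(segment, win):
    """Flattened WHT spectra of all win x win windows tiling the segment."""
    n = segment.shape[0]
    specs = [wht2d(segment[r:r + win, c:c + win])
             for r in range(0, n, win) for c in range(0, n, win)]
    return np.concatenate([s.ravel() for s in specs])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class WeakNet:
    """Tiny fully connected network; in the paper the weights would come
    from training, here they are random placeholders."""
    def __init__(self, d_in, d_hidden=32, n_classes=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.05, (d_in, d_hidden))
        self.W2 = rng.normal(0.0, 0.05, (d_hidden, n_classes))
    def __call__(self, x):
        return softmax(np.maximum(x @ self.W1, 0.0) @ self.W2)

SEG = 32                                  # assumed segment side (power of two)
segment = np.random.rand(SEG, SEG)        # stand-in grayscale segment
windows = [SEG, SEG // 2, SEG // 4]       # full, half, quarter window sizes
descs = [descriptor(segment, w) for w in windows]
nets = [WeakNet(d.size, seed=i) for i, d in enumerate(descs)]
probs = np.mean([net(d) for net, d in zip(nets, descs)], axis=0)  # ensemble average
classes = ["Smoke", "Flame", "Indifferent"]
print(classes[int(np.argmax(probs))], probs)
```

Averaging the class probabilities of the three independently trained networks is one straightforward way to realize the ensemble-averaging block mentioned above; the abstract does not specify whether the combination is done on probabilities or raw outputs.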