Abstract

Effective response to fire requires accurate and timely information about its evolution. To accomplish this valuable fire analysis step, this work fuses the low-cost video fire detection results of multiple cameras using a novel multi-view localization framework, so that valuable fire characteristics are detected at an early stage of the fire. The framework merges the single-view detection results of the multiple cameras by homographic projection onto multiple horizontal and vertical planes, which slice the scene. The crossings of these slices create a 3D grid of virtual sensor points, called the FireCube. Using this grid and subsequent spatial and temporal 3D clean-up filters, information about the location of the fire, its size and its direction of propagation can be extracted instantly from the video data. The novel aspect of the proposed framework is the 3D grid creation, a 3D extension of multiple-plane homography. The use of spatial and temporal 3D filters, which extend existing 2D filter concepts, also provides a more reliable fire analysis. Experimental results indicate that the proposed multi-view fire localization framework accurately detects and localizes the fire. Two cameras already suffice to achieve a dimension accuracy of 90% and a position accuracy of 98%; by further increasing the number of cameras it is even possible to achieve a dimension accuracy of 96% and a position accuracy of 99%. Furthermore, the experiments show that increasing the number of cameras monitoring the scene has a positive effect on the detection rate: the gain of using four cameras instead of one is 3%.
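The fusion step described above can be illustrated with a minimal sketch. The code below is not the authors' implementation; it assumes that each camera provides a binary fire-detection mask and a 3x3 homography mapping points on one horizontal slice plane to that camera's image. A virtual sensor point on the slice is assigned one vote per camera whose mask covers its projection; points receiving votes from all cameras would be marked as fire voxels, and stacking such slices yields the FireCube-style 3D grid. The helper names `project_points` and `fire_votes` are hypothetical.

```python
import numpy as np

def project_points(H, pts):
    """Apply a 3x3 homography H to an Nx2 array of plane points,
    returning the corresponding Nx2 image coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:3]                 # perspective divide

def fire_votes(grid_xy, homographies, masks):
    """For one slice plane: count, per virtual sensor point, how many
    camera fire masks contain its homographic projection."""
    votes = np.zeros(len(grid_xy), dtype=int)
    for H, mask in zip(homographies, masks):
        uv = np.round(project_points(H, grid_xy)).astype(int)
        h, w = mask.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & \
                 (uv[:, 1] >= 0) & (uv[:, 1] < h)     # drop out-of-frame points
        idx = np.where(inside)[0]
        votes[idx] += mask[uv[idx, 1], uv[idx, 0]]    # 1 vote if mask fires there
    return votes
```

A point would then be kept as a fire voxel when its vote count equals the number of cameras (or passes the subsequent spatial/temporal clean-up filters mentioned in the abstract).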
