Abstract

Effective fire detection using vision sensors is a widely recognized challenge in smart cities and rural areas, where forest and building fires significantly contribute to the loss of human lives and property. Early fire detection with deep learning has emerged as an effective solution using closed-circuit television (CCTV) in smart cities, but CCTV offers limited coverage of large building infrastructures and urban forests. Unmanned aerial vehicles (UAVs) cover wide areas, but fire detection in visual data captured from UAVs is a challenging task. Therefore, we extract deep multi-scale features from a backbone model and apply an attention mechanism for accurate fire detection. Features from intermediate layers capture fire regions through spatial edge information, while final layers extract global image representations. Fusing these features yields an effective image representation, and the fused features are further enhanced with multi-headed self-attention to highlight the most important fire regions. Preliminary experimental results (https://github.com/tanveer-hussain/DMFA-Fire) on a UAV fire detection dataset demonstrate the effective performance of the proposed model against rivals, offering a new perspective on using multi-layer features in deep models for accurate detection and thereby providing effective applicability in smart-city environments.
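The pipeline described above (multi-scale feature fusion followed by multi-headed self-attention) can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the feature maps are random placeholders for backbone outputs, the projection matrices stand in for learned weights, and all shapes (a 7×7 spatial grid, 64 channels per scale, 4 heads) are assumed for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, num_heads, rng):
    """Scaled dot-product self-attention over spatial tokens.

    x: (tokens, dim) fused feature map flattened over spatial positions.
    The random projections below stand in for learned Q/K/V weights.
    """
    tokens, dim = x.shape
    head_dim = dim // num_heads
    Wq = rng.standard_normal((dim, dim)) / np.sqrt(dim)
    Wk = rng.standard_normal((dim, dim)) / np.sqrt(dim)
    Wv = rng.standard_normal((dim, dim)) / np.sqrt(dim)
    # Project and split into heads: (heads, tokens, head_dim)
    q = (x @ Wq).reshape(tokens, num_heads, head_dim).transpose(1, 0, 2)
    k = (x @ Wk).reshape(tokens, num_heads, head_dim).transpose(1, 0, 2)
    v = (x @ Wv).reshape(tokens, num_heads, head_dim).transpose(1, 0, 2)
    # Attention weights over spatial positions highlight salient regions
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(head_dim), axis=-1)
    # Recombine heads back into (tokens, dim)
    out = (attn @ v).transpose(1, 0, 2).reshape(tokens, dim)
    return out, attn

rng = np.random.default_rng(0)
# Placeholder backbone outputs: intermediate (edge-level) and final (global)
# feature maps, each flattened to 49 spatial tokens (7x7) with 64 channels
intermediate = rng.standard_normal((49, 64))
final = rng.standard_normal((49, 64))
# Fuse the two scales by channel-wise concatenation -> (49, 128)
fused = np.concatenate([intermediate, final], axis=-1)

enhanced, attn = multi_head_self_attention(fused, num_heads=4, rng=rng)
print(enhanced.shape)  # (49, 128)
print(attn.shape)      # (4, 49, 49)
```

Concatenation is only one plausible fusion choice; element-wise addition or weighted fusion would fit the same sketch, and in a trained model the attention weights over the 49 positions would concentrate on fire regions.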
