Abstract

Wildfires are one of the most significant disturbance factors in forests and other densely vegetated regions. Mapping them is particularly important for fire prediction and for estimating burned biomass, so accurate and timely mapping of burned areas plays a key role in disaster management. Estimating burned areas from multi-spectral data is challenging because of complex backgrounds and the varied spectral responses of burned targets. To this end, this study presents a novel burned-area mapping framework based on the fusion of multi-temporal Sentinel-1 coherence imagery and post-event Sentinel-2 imagery. The proposed framework combines hybrid quadratic morphological (QM) operations with convolution layers for deep feature extraction. The resulting architecture, QMDNN-Net, is built on a deep Siamese network with two streams that extract deep features from the multi-temporal coherence data and the Sentinel-2 imagery, respectively. The two streams share the same structure, with equal numbers of group-dilated convolution blocks and QM layers. Each QM layer computes quadratic dilation and quadratic erosion and outputs their average. QMDNN-Net is evaluated on two real-world wildfire datasets based on Sentinel-1 and Sentinel-2 imagery. The results show that it achieves an overall accuracy of 95.5% and a Kappa coefficient of 0.9, outperforming other state-of-the-art methods.
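The QM layer described above (the average of a quadratic dilation and a quadratic erosion) can be sketched as follows. This is a minimal 1-D NumPy illustration, not the paper's implementation: the quadratic structuring function s(z) = -z²/(4t), the window radius, and the function names are all assumptions for illustration.

```python
import numpy as np

def _quadratic_se(radius: float, t: float) -> np.ndarray:
    # Assumed quadratic structuring function s(z) = -z^2 / (4t),
    # evaluated on integer offsets -radius..radius.
    z = np.arange(-radius, radius + 1, dtype=float)
    return -(z ** 2) / (4.0 * t)

def quadratic_dilation(f: np.ndarray, t: float = 1.0, radius: int = 3) -> np.ndarray:
    # Max-plus correlation: (f (+) s)(x) = max_d { f(x + d) + s(d) }.
    s = _quadratic_se(radius, t)
    n = len(f)
    out = np.empty(n)
    for x in range(n):
        vals = [f[x + d] + s[d + radius]
                for d in range(-radius, radius + 1) if 0 <= x + d < n]
        out[x] = max(vals)
    return out

def quadratic_erosion(f: np.ndarray, t: float = 1.0, radius: int = 3) -> np.ndarray:
    # Min-plus correlation: (f (-) s)(x) = min_d { f(x + d) - s(d) }.
    s = _quadratic_se(radius, t)
    n = len(f)
    out = np.empty(n)
    for x in range(n):
        vals = [f[x + d] - s[d + radius]
                for d in range(-radius, radius + 1) if 0 <= x + d < n]
        out[x] = min(vals)
    return out

def qm_layer(f: np.ndarray, t: float = 1.0, radius: int = 3) -> np.ndarray:
    # The abstract's QM output: the average of quadratic dilation and erosion.
    return 0.5 * (quadratic_dilation(f, t, radius) + quadratic_erosion(f, t, radius))
```

Because s(0) = 0, the dilation is pointwise at least f and the erosion at most f, so the averaged QM response is a smoothed signal bracketed by the two; in the network these operations would act on 2-D feature maps with learnable t, which this sketch omits.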
