Abstract

Forest and land fires arise from natural or human causes and can have severe impacts, so rapid burned area mapping is needed to assess the resulting losses. Satellite remote sensing is a prominent technology for rapid burned area mapping, but the use of optical data is hampered by cloud cover. This study evaluates burned area models derived from optical data (optical-based classification), synthetic aperture radar (SAR) data (SAR-based classification), and combined SAR and optical data, using random forest (RF), multilayer perceptron (MLP), and convolutional neural network (CNN) classifiers. SAR-based change detection parameters, namely the radar burn ratio (RBR) and radar burn difference (RBD), together with gray-level co-occurrence matrix (GLCM) texture features, serve as the input features for the RF and MLP classifiers. The results show that the CNN classifier outperforms RF and MLP for both the optical-based and the combined optical-SAR classification, with accuracies of 99.73% and 99.86%, respectively. The CNN classifier is largely unaffected by the contribution of SAR data in cloud-free areas, giving stable results under both classification schemes. For the combined optical-SAR and the SAR-based classifications, the GLCM texture features yield better results than the RBR and RBD features in cloud-affected areas with the RF and MLP classifiers. In cloud-free areas, the GLCM texture features also influence the MLP classifier considerably more than the RF classifier under both classification schemes.
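For readers unfamiliar with the features named above, the following is a minimal sketch, not the authors' implementation, of the kind of pipeline the abstract describes: deriving RBD and RBR from pre- and post-fire SAR backscatter, computing one GLCM texture measure, and feeding a per-pixel feature stack to an RF classifier. The dB-domain formulations of RBD and RBR, the synthetic data, and the scene-wide GLCM window are assumptions for illustration only; the study's MLP and CNN variants are omitted.

```python
# Sketch of a SAR-feature burned-area pipeline (illustrative assumptions,
# not the paper's code): RBD/RBR change features + GLCM texture -> RF.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def radar_burn_features(sigma_pre_db, sigma_post_db):
    """RBD as post-minus-pre backscatter (dB); RBR as the post/pre power
    ratio in linear units -- one common formulation, assumed here."""
    rbd = sigma_post_db - sigma_pre_db
    rbr = 10.0 ** (sigma_post_db / 10.0) / 10.0 ** (sigma_pre_db / 10.0)
    return rbd, rbr

def glcm_contrast(image_u8):
    """One GLCM texture feature (contrast) for an 8-bit image."""
    glcm = graycomatrix(image_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]

# Synthetic demo: burned pixels lose ~2 dB of backscatter after the fire.
rng = np.random.default_rng(0)
sigma_pre = rng.normal(-8.0, 1.0, size=(64, 64))
burned = np.zeros((64, 64), dtype=bool)
burned[16:48, 16:48] = True
sigma_post = sigma_pre - 2.0 * burned + rng.normal(0.0, 0.3, size=(64, 64))
rbd, rbr = radar_burn_features(sigma_pre, sigma_post)

# Texture from an 8-bit rescaling of a (synthetic) optical band.
optical = rng.random((64, 64)) - 0.3 * burned
optical_u8 = np.uint8(255 * (optical - optical.min()) / np.ptp(optical))
print("GLCM contrast (whole scene):", glcm_contrast(optical_u8))

# Per-pixel stack of the optical band with the SAR features, mirroring
# the combined optical-SAR classification scheme, classified with RF.
X = np.column_stack([optical.ravel(), rbd.ravel(), rbr.ravel()])
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X, burned.ravel())
print("training accuracy:", rf.score(X, burned.ravel()))
```

In practice the GLCM statistic would be computed in a sliding window to give a per-pixel texture band, which is then stacked alongside RBD, RBR, and the optical bands before classification.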
