Abstract

Evaluating the effects of forest fires on ecosystem structure and function requires mapping burned forest areas accurately, efficiently, economically, and practically from satellite images. Extracting burned forest areas from high-resolution satellite images with image classification algorithms, and assessing the success of different classification algorithms, has become a prominent research field. This study demonstrates the capability of the deep learning-based Stacked Autoencoders method for mapping burned forest areas from Sentinel-2 satellite images. The Stacked Autoencoders method, applied here as an unsupervised learning approach, was compared qualitatively and quantitatively with frequently used supervised learning algorithms (k-Nearest Neighbors (k-NN), Subspace k-NN, Support Vector Machines, Random Forest, Bagged Decision Tree, Naive Bayes, and Linear Discriminant Analysis) on two distinct burned forest zones. Selecting burned forest zones with contrasting structural characteristics enabled an objective assessment. Burned areas manually digitized from Sentinel-2 satellite images were used as reference data for the accuracy assessment. For comparison, several classification performance and quality metrics were used: Overall Accuracy, Mean Squared Error, Correlation Coefficient, Structural Similarity Index Measure, Peak Signal-to-Noise Ratio, Universal Image Quality Index, and the Kappa coefficient. In addition, the consistency of the Stacked Autoencoders method was examined through boxplots. In both quantitative and qualitative analyses, the Stacked Autoencoders method achieved the highest accuracy values.
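To make the core technique concrete, the following is a minimal, hedged sketch of the greedy layer-wise pretraining that defines a stacked autoencoder, written in plain NumPy. It is not the authors' implementation: the layer sizes, learning rate, epoch count, and the 10-feature per-pixel input (a stand-in for Sentinel-2 band vectors scaled to [0, 1]) are all illustrative assumptions. Each sigmoid autoencoder is trained to reconstruct its input; its encoder output then serves as input to the next layer, yielding compact deep features per pixel that a downstream classifier or clustering step could use.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden, lr=0.5, epochs=200):
    """Train one sigmoid autoencoder on X (n_samples x n_features)
    by full-batch gradient descent on reconstruction MSE.
    Returns the learned encoder weights and bias."""
    n_in = X.shape[1]
    W1 = rng.normal(0.0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)          # encode
        R = sigmoid(H @ W2 + b2)          # decode (reconstruction)
        dR = (R - X) * R * (1.0 - R)      # output-layer delta
        dH = (dR @ W2.T) * H * (1.0 - H)  # hidden-layer delta
        W2 -= lr * H.T @ dR / len(X); b2 -= lr * dR.mean(axis=0)
        W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)
    return W1, b1

# Toy stand-in for per-pixel spectral vectors (e.g. Sentinel-2 bands in [0, 1]).
X = rng.random((256, 10))

# Greedy layer-wise pretraining: layer 2 encodes layer 1's hidden output.
W1, b1 = train_autoencoder(X, 6)
H1 = sigmoid(X @ W1 + b1)
W2, b2 = train_autoencoder(H1, 3)
H2 = sigmoid(H1 @ W2 + b2)  # stacked deep features: 3 per pixel
print(H2.shape)  # (256, 3)
```

In the unsupervised setting described by the abstract, features like `H2` would be separated into burned/unburned classes without labeled training pixels; the exact clustering or thresholding step used in the study is not specified here.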
