Abstract

Abstract. The use of remote sensing data for burned area mapping has led to unprecedented advances within the field in recent years. Although threshold-based and traditional machine learning methods have been successfully applied to the task, they come with drawbacks, including complex rule sets and the need for prior feature engineering. In contrast, deep learning offers an end-to-end solution for image analysis and semantic segmentation. In this study, a variation of U-Net is investigated for mapping burned areas in mono-temporal Sentinel-2 imagery. The experimental setup is divided into two phases. The first includes a performance evaluation based on test data, while the second serves as a use case simulation and spatial evaluation of training data quality. The former is specifically designed to compare the results of two local models (trained only with data from the respective research areas) and a global model (trained with the whole dataset), the research areas being Indonesia and Central Africa. The networks are trained from scratch with a manually generated, customized training dataset. The application of the two variants per region revealed only a slight superiority of the local model (macro-F1: 92%) over the global model (macro-F1: 91%) in Indonesia, with no difference in overall accuracy (OA) at 94%. In Central Africa, the results of the global and local models are identical in both metrics (OA: 84%, macro-F1: 82%). Overall, the outcome demonstrates the global model's ability to generalize despite high dissimilarities between the research areas.
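For readers unfamiliar with the reported metrics, the following minimal sketch (not taken from the paper) shows how overall accuracy and macro-F1 can be computed for a binary burned/unburned pixel classification; the function name and the toy arrays are illustrative only.

    import numpy as np

    def overall_accuracy_and_macro_f1(y_true, y_pred):
        """Compute OA and macro-F1 for a binary burned (1) / unburned (0) mask.

        y_true, y_pred: integer arrays of identical shape with values in {0, 1}.
        """
        y_true = np.asarray(y_true).ravel()
        y_pred = np.asarray(y_pred).ravel()

        # Overall accuracy: fraction of correctly classified pixels.
        oa = float(np.mean(y_true == y_pred))

        # Per-class F1, then the unweighted (macro) average over both classes.
        f1_scores = []
        for cls in (0, 1):
            tp = np.sum((y_pred == cls) & (y_true == cls))
            fp = np.sum((y_pred == cls) & (y_true != cls))
            fn = np.sum((y_pred != cls) & (y_true == cls))
            precision = tp / (tp + fp) if (tp + fp) else 0.0
            recall = tp / (tp + fn) if (tp + fn) else 0.0
            f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
            f1_scores.append(f1)
        return oa, float(np.mean(f1_scores))

    # Toy 2x2 masks: OA = 0.75, macro-F1 ≈ 0.73.
    oa, macro_f1 = overall_accuracy_and_macro_f1([[1, 0], [0, 1]], [[1, 0], [1, 1]])
    print(f"OA: {oa:.2%}, macro-F1: {macro_f1:.2%}")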

Highlights

  • Fire is a natural and ecologically relevant process in many ecosystems (Kelly & Brotons, 2017)

  • With a combined revisit time of 5 days across both missions (Sentinel-2A and Sentinel-2B), and with Near Infrared (NIR) and Shortwave Infrared (SWIR) spectral bands that are especially sensitive to fire effects (Pleniou & Koutsias, 2013), Sentinel-2 can facilitate the creation of a burned area product at 10 m resolution, allowing improved post-fire evaluation of ecosystem damage and carbon emissions (an illustrative spectral-index sketch follows this list)

  • Besides Knopp et al. (2020), who achieved high overall accuracies in selected locations across the globe, Farasin et al. (2020) proved the superior performance of this combination of network and sensor over conventional methods in a global case study of 147 cloud-free areas. While these existing models are either trained and applied in a local or a global setting, this study investigates both by looking at the generalizability of a global model in comparison to local models, using the example of two environmentally different research areas - Indonesia and Central Africa (Chad and the Central African Republic)
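As an illustration of why the NIR and SWIR bands mentioned above are sensitive to fire effects, the sketch below computes the Normalized Burn Ratio (NBR) from a pair of Sentinel-2 reflectance arrays (band 8 for NIR, band 12 for SWIR). This index is not part of the study's end-to-end U-Net pipeline; the function and array names are placeholders.

    import numpy as np

    def normalized_burn_ratio(nir, swir):
        """NBR = (NIR - SWIR) / (NIR + SWIR).

        Healthy vegetation reflects strongly in the NIR and weakly in the SWIR,
        so NBR drops sharply over freshly burned surfaces.
        nir, swir: reflectance arrays of matching shape, e.g. Sentinel-2 band 8
        (10 m) and band 12 (20 m, resampled to 10 m).
        """
        nir = np.asarray(nir, dtype=np.float32)
        swir = np.asarray(swir, dtype=np.float32)
        denom = nir + swir
        # Avoid division by zero over no-data pixels.
        return np.where(denom > 0, (nir - swir) / denom, 0.0)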

Summary

Introduction

Fire is a natural and ecologically relevant process in many ecosystems (Kelly & Brotons, 2017). Most prevalent in the literature is the use of Landsat data in regional studies and MODIS data for the development of global burned area products. While these present an important source of information for multiple user communities, it has been shown that, especially in areas prone to smaller fires, the low spatial resolution can lead to an underestimation of the total burned area (van der Werf et al., 2017). Convolutional neural networks (CNNs) show the most promising potential, as they are designed to handle spatially dependent data such as images especially well (LeCun et al., 1998). Despite that, their share in solutions for burned area applications is comparatively low, which might be due to the limited amount of labelled training examples available. Other small studies show promising results on the combination of U-Net and Sentinel-2 data for burned area mapping.
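To make the network type concrete, the following is a heavily simplified U-Net-style encoder-decoder sketched in PyTorch. It is not the study's architecture (which is a deeper U-Net variation); the band count, layer widths, and patch size are assumptions chosen for illustration.

    import torch
    import torch.nn as nn

    class MiniUNet(nn.Module):
        """Simplified U-Net-style encoder-decoder for burned area segmentation.

        One down-sampling and one up-sampling stage with a skip connection;
        `in_channels` corresponds to the number of Sentinel-2 bands fed in.
        """

        def __init__(self, in_channels: int = 6, num_classes: int = 2):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            )
            self.down = nn.MaxPool2d(2)
            self.bottleneck = nn.Sequential(
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            )
            self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
            self.dec = nn.Sequential(
                nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            )
            self.head = nn.Conv2d(32, num_classes, kernel_size=1)

        def forward(self, x):
            skip = self.enc(x)                         # full-resolution features
            x = self.bottleneck(self.down(skip))       # coarse spatial context
            x = self.up(x)                             # back to full resolution
            x = self.dec(torch.cat([x, skip], dim=1))  # skip connection
            return self.head(x)                        # per-pixel class logits

    # Toy forward pass on a 256x256 patch with 6 spectral bands.
    logits = MiniUNet(in_channels=6)(torch.randn(1, 6, 256, 256))
    print(logits.shape)  # torch.Size([1, 2, 256, 256])

The skip connection is what allows a U-Net to recover fine spatial detail, such as burn scar boundaries, that would otherwise be lost in the down-sampling path.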
