Abstract

Explainable machine learning has recently gained attention due to its contribution to understanding how a model works and why certain decisions are made. A so far less targeted goal, especially in remote sensing, is the derivation of new knowledge and scientific insights from observational data. In this paper, we propose an explainable machine learning approach to address the challenge that certain land cover classes, such as wilderness, are not well defined in satellite imagery and can only be mapped with vague labels. Our approach combines a U-Net and a ResNet-18 to perform scene classification while simultaneously providing interpretable information from which new insights about classes can be derived. We show that our methodology deepens our understanding of what makes nature wild by automatically identifying simple concepts, such as wasteland, that semantically describe wilderness. It further quantifies a class's sensitivity with respect to a concept and uses this sensitivity as an indicator of how well the concept describes the class.
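
The abstract only names the two building blocks. The following minimal PyTorch sketch illustrates one plausible way such a pipeline could be wired together and how a gradient-based sensitivity of a class with respect to a concept channel might be computed. The module structure, channel counts, class and concept indices, and the sensitivity measure are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch (not the authors' code): a small encoder-decoder
# stands in for the U-Net (skip connections omitted for brevity), its
# per-pixel concept maps feed a ResNet-18 scene classifier, and a
# gradient-based sensitivity of one class logit w.r.t. one concept
# channel is computed. All names and numbers are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class TinyConceptNet(nn.Module):
    """Encoder-decoder producing per-pixel concept activations in [0, 1]."""

    def __init__(self, in_ch=3, n_concepts=8):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_concepts, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.dec(self.enc(x)))


class ConceptScenePipeline(nn.Module):
    """Concept maps from the encoder-decoder feed a ResNet-18 classifier."""

    def __init__(self, n_concepts=8, n_classes=2):
        super().__init__()
        self.concept_net = TinyConceptNet(n_concepts=n_concepts)
        self.classifier = resnet18(weights=None, num_classes=n_classes)
        # Adapt the first convolution to accept concept maps instead of RGB.
        self.classifier.conv1 = nn.Conv2d(
            n_concepts, 64, 7, stride=2, padding=3, bias=False
        )

    def forward(self, x):
        concepts = self.concept_net(x)
        logits = self.classifier(concepts)
        return logits, concepts


# Rough stand-in for the sensitivity measure described in the abstract:
# mean absolute gradient of a class logit w.r.t. one concept channel.
model = ConceptScenePipeline()
image = torch.randn(1, 3, 128, 128)            # dummy satellite patch
logits, concepts = model(image)
concepts.retain_grad()
logits[0, 1].backward()                         # hypothetical "wilderness" class
sensitivity = concepts.grad[0, 0].abs().mean()  # hypothetical "wasteland" concept
print(f"mean |d logit / d concept|: {sensitivity.item():.4f}")
```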

Highlights

  • Machine learning (ML) methods are successfully used in remote sensing for various tasks such as classification, detection, or parameter prediction

  • To move beyond these tasks toward deriving new scientific knowledge, explainable ML has been strongly promoted in research in recent years

  • As a step toward better land cover mapping, we propose an explainable ML approach that deepens our understanding of what makes nature wild and derives novel insights about this land cover class


Introduction

Machine learning (ML) methods are successfully used in remote sensing for various tasks such as classification, detection, or parameter prediction. Beyond solving the application task itself and merely learning relationships between observed data and the desired output, a more recent but not yet widespread use of ML is the derivation of new scientific knowledge (Roscher et al., 2020a). Roscher et al. (2020b) discuss first works in this direction and show that explainability is often used to align models with existing knowledge, for example, to improve models and to correct obvious flaws in case of wrong decisions. So far, explainable ML has been used less to uncover previously unknown patterns and to derive novel scientific insights. We see a high relevance for mapping wilderness areas using remote sensing observations, as this can be an important source of information for stakeholders in the context of establishing new protected areas.

