Abstract

Building-damage mapping using remote-sensing images plays a critical role in providing quick and accurate information to first responders after major earthquakes. In recent years, there has been increasing interest in generating post-earthquake building-damage maps automatically using artificial intelligence (AI)-based frameworks. These frameworks are promising, yet not fully reliable, for reasons including the site-specific design of the methods, the lack of transparency in the AI models, the limited quality of the labelled images, and the use of irrelevant descriptor features in building the models. Explainable AI (XAI) can provide insight into these limitations and thus guide modifications to the training dataset and the model. This paper proposes the use of SHAP (Shapley additive explanations) to interpret the outputs of a multilayer perceptron (MLP), a machine learning model, and to analyse the impact of each feature descriptor included in the model for building-damage assessment, in order to examine the reliability of the model. In this study, a post-event satellite image from the 2018 Palu earthquake was used. The results show that the MLP can classify collapsed and non-collapsed buildings with an overall accuracy of 84% after removing redundant features. Further, spectral features are found to be more important than texture features in distinguishing collapsed from non-collapsed buildings. Finally, we argue that constructing an explainable model helps to understand why the model classifies buildings as collapsed or non-collapsed and opens avenues towards a transferable AI model.
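
To make this workflow concrete, the sketch below shows one way such an analysis could look: a scikit-learn MLP trained on per-building spectral/texture descriptors, explained with SHAP's model-agnostic KernelExplainer. This is a minimal illustration, not the authors' exact pipeline; the feature names, synthetic data, and network size are placeholders chosen for the example.

```python
# Minimal sketch (assumptions: placeholder features and synthetic labels,
# not the paper's dataset or exact model configuration).
import numpy as np
import shap
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical descriptor table: rows = buildings, columns = spectral/texture features
rng = np.random.default_rng(0)
feature_names = ["mean_red", "mean_nir", "ndvi", "glcm_contrast", "glcm_homogeneity"]
X = rng.normal(size=(500, len(feature_names)))
y = rng.integers(0, 2, size=500)  # 1 = collapsed, 0 = non-collapsed (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
mlp.fit(scaler.transform(X_train), y_train)

# Model-agnostic SHAP attribution of the "collapsed" probability to each feature;
# a small background sample keeps KernelExplainer tractable.
background = shap.sample(scaler.transform(X_train), 50)
explainer = shap.KernelExplainer(lambda a: mlp.predict_proba(a)[:, 1], background)
shap_values = explainer.shap_values(scaler.transform(X_test[:20]))

# Rank features by mean absolute SHAP value as a global importance proxy,
# e.g. to compare spectral versus texture descriptors.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

In this kind of setup, features that consistently receive near-zero SHAP values are candidates for removal, which is the sense in which an explainable model can inform both feature selection and trust in the classifier's decisions.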

Highlights

  • Using remote-sensing imagery for damage mapping dates back to the San Francisco earthquake in 1906, when a number of kites were used as aerial platforms to capture images from the affected area [1]

  • Remote-sensing-based damage mapping is limited to the visual interpretation of optical satellite images in real-world scenarios, which is labour-intensive and time-consuming [7,8]

  • The results from this study showed improvement in accuracy as more datasets from both SAR and optical images were added to the learning process


Introduction

Using remote-sensing imagery for damage mapping dates back to the San Francisco earthquake in 1906, when a number of kites were used as aerial platforms to capture images of the affected area [1]. A preliminary attempt to use satellite images in the seismic field was probably the study conducted in 1972 to investigate the cause of the 1964 Alaska earthquake [2]. This makes post-earthquake damage mapping one of the oldest applications of remote-sensing images. Automatic damage assessment remains challenging and is still the subject of active research [3,4,5,6]. Despite growing interest in automatic post-earthquake damage mapping, one factor that hinders the reliable use of AI in real-world applications is the lack of explainability of AI models to justify their decisions [13].

