Abstract

Rapid mapping of event landslides is crucial to identify damaged areas and to support effective disaster response. Traditionally, such maps are generated by visual interpretation of remote sensing imagery (from manned/unmanned airborne systems or spaceborne sensors) and/or by pixel-based and object-based methods exploiting data-intensive machine learning algorithms. Recent works have explored the use of convolutional neural networks (CNN), a deep learning algorithm, for mapping landslides from remote sensing data. These methods follow a standard supervised learning workflow that involves training a model using a landslide inventory covering a relatively small area. The trained model is then used to predict landslides in the surrounding regions. Here, we propose a new strategy, i.e., a progressive CNN training relying on combined inventories to build a generalized model that can be applied directly to a new, unexplored area. We first prove the effectiveness of CNNs by training and validating on event landslide inventories in four regions affected by earthquakes and/or extreme meteorological events. Next, we use the trained CNNs to map landslides triggered by new events spread across different geographic regions. We found that CNNs trained on a combination of inventories have better generalization performance, with a bias towards high precision and low recall scores. In our tests, the combined training model achieved the highest Matthews correlation coefficient (MCC) score of 0.69 when mapping landslides in new, unseen regions. The mapping was done on images from different optical sensors, resampled to spatial resolutions of 6 m, 10 m, and 30 m. Despite a slightly reduced performance, the main advantage of combined training is to overcome the requirement of a local inventory for training a new deep learning model.
This implementation can facilitate automated pipelines providing fast generation of landslide maps in the post-disaster phase. The study areas in this work were selected from seismically active zones with high hydrological hazard and dense vegetation coverage; future work should therefore also include less vegetated geographic regions.
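The MCC score reported above summarizes binary classification quality from all four confusion-matrix counts, which makes it more robust than accuracy on the class-imbalanced data typical of landslide maps (landslide pixels are rare). A minimal sketch of the metric, assuming simple true/false positive/negative counts (the helper name is illustrative, not from the paper):

```python
import math

def mcc(tp: int, fp: int, fn: int, tn: int) -> float:
    """Matthews correlation coefficient from confusion-matrix counts.

    Ranges from -1 (total disagreement) through 0 (random) to +1
    (perfect prediction). Returns 0.0 when the denominator is zero,
    a common convention for degenerate confusion matrices.
    """
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

Unlike precision (`tp / (tp + fp)`) or recall (`tp / (tp + fn)`) alone, MCC penalizes both the false positives and the false negatives, which matches the paper's observation of a high-precision/low-recall bias in the combined model.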

Highlights

  • Different variants of U-Net have been used in multiple studies to map landslides from optical images in different geographical regions[22,23,24]

  • Many deep learning implementations to map landslides have been proposed in recent years, which typically use a convolutional neural network (CNN) to learn directly from EO data[17,18,19,20,21]

  • Ghorbanzadeh et al.[17] used RapidEye images combined with topographic information derived from the 5 m ALOS digital elevation model (DEM) to train a CNN for detecting landslides


Introduction

Different variants of U-Net have been used in multiple studies to map landslides from optical images in different geographical regions[22,23,24]. These studies adopt a conventional supervised learning workflow in which a model is first trained in a controlled region (known as the training region) and then reused to generate a landslide map of its surroundings with comparable geo-environmental characteristics. Machine learning algorithms trained and validated with these existing approaches cannot be adopted for fully autonomous mapping of new landslides, because they require a local landslide inventory created for the event to train the deep learning models. We have used a deep network architecture, a large training dataset from multiple sensors and landslide inventories, and strong data augmentation to make the CNN learn features for mapping landslides generated by an unseen/future triggering event. The term 'inventory' refers to a catalogue of landslides induced by a single trigger.
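The combined-inventory strategy described above amounts to two ingredients: pooling training tiles from several event inventories into one dataset, and applying geometric augmentation so the CNN does not overfit any single region. A minimal sketch with NumPy (the data layout and function names are assumptions for illustration, not the paper's actual pipeline):

```python
import numpy as np

def augment_tile(tile: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Random flips and 90-degree rotations of a square image tile (H, W, C).

    Geometric transforms are label-preserving for segmentation: the same
    transform would be applied to the landslide mask.
    """
    if rng.random() < 0.5:
        tile = np.flip(tile, axis=0)  # vertical flip
    if rng.random() < 0.5:
        tile = np.flip(tile, axis=1)  # horizontal flip
    k = int(rng.integers(0, 4))       # 0, 90, 180, or 270 degrees
    return np.rot90(tile, k)

def combined_training_set(inventories: list[dict]) -> tuple[list, list]:
    """Pool image tiles and landslide masks from several event inventories.

    Each inventory is assumed to be a dict with 'tiles' and 'masks' lists;
    pooling them yields the combined dataset used to train one model.
    """
    tiles, masks = [], []
    for inv in inventories:
        tiles.extend(inv["tiles"])
        masks.extend(inv["masks"])
    return tiles, masks
```

In a progressive-training setting, each new event inventory is simply appended to the pool before the model is (re)trained, so no local inventory is needed at prediction time.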

