Abstract

Convolutional neural networks (CNNs) are an effective and popular deep learning method that automatically learns complicated non-linear mappings from raw inputs to given labels or ground truth through a series of convolutional layers. This study focuses on detecting landslides from high-resolution optical satellite images using CNN-based methods, providing opportunities for recognizing latent landslides and updating large-scale landslide inventories with high accuracy and time efficiency. To handle the variety of landslides and their complicated backgrounds, attention mechanisms, inspired by the human visual system, are developed to help the CNN extract feature representations that better distinguish landslides from backgrounds. Because deep learning requires a large amount of labeled data to train a model, we manually prepared a landslide dataset located in Bijie City, China. In the dataset, 770 landslides, including rock falls, rock slides, and a few debris slides, were interpreted by geologists from satellite images and digital elevation model (DEM) data and further verified by fieldwork. The landslide data were split, at a ratio of 2:1, into a training set used to train the attention-boosted CNN model and a testing set used to evaluate its performance. The experimental results show that the best F1-score for landslide detection reached 96.62%. The results also show that our spatial-channel attention mechanism clearly outperformed other recent attention mechanisms. Additionally, we demonstrate that new potential landslides can be predicted efficiently based on our dataset.
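The spatial-channel attention idea mentioned above can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the authors' implementation: the channel gate is reduced to a plain sigmoid over globally average-pooled activations (the learned weights of the usual gating layers are omitted), and the spatial gate to a sigmoid over the channel-wise mean map.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_channel_attention(feat):
    """Sketch of sequential channel-then-spatial attention on a (C, H, W) map.

    Hypothetical simplification: real modules would learn the gating weights;
    here both gates are parameter-free for illustration only.
    """
    # Channel attention: squeeze spatial dims by global average pooling,
    # then reweight each channel with a sigmoid gate in (0, 1).
    channel_gate = sigmoid(feat.mean(axis=(1, 2)))        # shape (C,)
    feat = feat * channel_gate[:, None, None]
    # Spatial attention: squeeze channels by averaging, then reweight
    # each spatial location with its own sigmoid gate.
    spatial_gate = sigmoid(feat.mean(axis=0))             # shape (H, W)
    return feat * spatial_gate[None, :, :]

# Usage: apply to a dummy CNN feature map; shape is preserved.
out = spatial_channel_attention(np.random.rand(16, 32, 32))
```

In attention-boosted CNNs of this kind, such a module is typically inserted after a convolutional block so that subsequent layers receive features in which landslide-relevant channels and locations are emphasized.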
