Abstract

In recent decades, data-driven landslide susceptibility models (DdLSM), which are based on statistical or machine learning approaches, have become popular for estimating the relative spatial probability of landslide occurrence. The available literature comprises a wealth of published studies that have identified a large variety of challenges and innovations in this field. This review presents a comprehensive, up-to-date overview of DdLSM. It begins with an introduction to the theoretical aspects of DdLSM research, followed by an in-depth bibliometric analysis of 2585 publications. This analysis is based on the Web of Science (Clarivate Analytics) database and provides insights into the transient characteristics and research trends within published spatial landslide assessments. Following the bibliometric analysis, a more detailed review of publications from 1985 to 2020 is given. A variety of criteria are explored in detail, including research design, study area extent, inventory characteristics, classification algorithms, predictors utilized, and validation techniques performed. This quantitative-oriented part of the review expands the time frame covered by the review of Reichenbach et al. in 2018 by also accounting for the four years 2017–2020. The originality of this research lies in combining: (a) a recap of important theoretical aspects of DdLSM; (b) a bibliometric analysis of the topic; (c) a quantitative-oriented review of relevant publications; and (d) a systematic summary of the findings, indicating important aspects and potential developments related to the DdLSM research topic. The results show that DdLSM are used within a wide range of applications, with study area extents ranging from a few square kilometers to national and even continental scales. In more than 70% of publications, a combination of the predictors slope angle, aspect, and geology is used. Simple classifiers, such as logistic regression or approaches based on frequency ratio, remain popular despite the growing trend of applying machine learning algorithms. Regarding validation, 38% of the publications were not clear about the validation method used. Among the studies that included validation techniques, the AUROC was the most popular validation metric, used in 44% of the studies. Finally, it can be concluded that the application of new classification techniques is often cited as a main research scope, even though the most relevant innovation may also lie in tackling data-quality issues and in adapting research designs to the particularities of the input data in order to improve prediction quality.

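To make the terminology above concrete, the following minimal sketch (not taken from any of the reviewed publications) illustrates a typical DdLSM workflow: a logistic-regression classifier is fitted to synthetic values of the frequently used predictors slope angle, aspect, and geology, and its predictive performance is then validated with the AUROC metric. The use of scikit-learn and all variable names and data are illustrative assumptions, not the method of any specific study.

```python
# Hypothetical data-driven landslide susceptibility sketch:
# logistic regression on synthetic predictors, validated with AUROC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Synthetic terrain attributes per mapping unit (e.g., grid cell)
slope = rng.uniform(0, 60, n)       # slope angle in degrees
aspect = rng.uniform(0, 360, n)     # aspect in degrees
geology = rng.integers(0, 3, n)     # three hypothetical lithological classes

# Synthetic presence/absence labels: steeper slopes and one lithology are more prone
logit = 0.08 * slope + 0.8 * (geology == 2) - 4.0
landslide = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Encode aspect as sine/cosine (circular variable) and geology as dummy variables
X = np.column_stack([
    slope,
    np.sin(np.radians(aspect)), np.cos(np.radians(aspect)),
    (geology == 1).astype(float), (geology == 2).astype(float),
])

X_train, X_test, y_train, y_test = train_test_split(
    X, landslide, test_size=0.3, random_state=0, stratify=landslide
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
susceptibility = model.predict_proba(X_test)[:, 1]  # relative spatial probability
print(f"AUROC on hold-out data: {roc_auc_score(y_test, susceptibility):.2f}")
```

The same skeleton applies to other classifiers mentioned in the abstract (e.g., frequency ratio or tree-based machine learning models); only the fitting step and predictor encoding would change, while the hold-out AUROC validation remains a common evaluation choice.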