Abstract

The reliability of the input data used within statistically based landslide susceptibility models usually determines the quality of the resulting maps. For very large territories, landslide susceptibility assessments are commonly built upon spatially incomplete and positionally inaccurate landslide information. The unavailability of flawless input data is contrasted by the need to identify landslide-prone terrain at such spatial scales. Instead of simply ignoring errors in the landslide data, we argue that modellers have to explicitly adapt their modelling design to avoid misleading results. This study examined different modelling strategies to reduce the undesirable effects of error-prone landslide inventory data, namely systematic spatial incompleteness and positional inaccuracy. For this purpose, the Austrian territory, with its abundant but heterogeneous landslide data, was selected as the study site. Conventional modelling practices were compared with alternative modelling designs to elucidate whether actively counterbalancing flawed landslide information can improve the modelling results. In this context, we compared widely applied logistic regression with an approach that minimizes the effects of heterogeneously complete landslide information (i.e. mixed-effects logistic regression). The challenge of positionally inaccurate landslide samples was tackled by elaborating and comparing the models for two terrain representations, namely grid cells and slope units. The results showed that conventional logistic regression tended to reproduce the incompleteness inherent in the landslide training data when the underlying model relied on explanatory variables directly related to the data bias. The adoption of a mixed-effects modelling approach reduced these undesired effects and led to geomorphologically more coherent spatial predictions.
As a consequence of their larger spatial extent, the slope unit–based models coped better with positional inaccuracies in the landslide data than their grid-based counterparts. The presented research demonstrates that, in the context of very large area susceptibility modelling, (i) ignoring flaws in the available landslide data can lead to geomorphically incoherent results despite an apparently high statistical performance, and (ii) the effects of landslide data imperfections can be actively diminished by adjusting the research design to the specific imperfections of the input data.

Highlights

  • In the last decades, there has been an increase in the reporting of landslide phenomena that caused damage or threatened society (Petley, 2012)

  • Low landslide densities were observed for the units Penninic window (Pw) and Bohemian Massif (Bm)

  • High conditional landslide frequencies were calculated for the land cover class pastures (P), broad-leaved forests (Bf) and mixed forests (Mf)



Introduction

There has been an increase in the reporting of landslide phenomena that caused damage or threatened society (Petley, 2012). Compared to physically based slope stability models, statistical landslide susceptibility analyses are more flexible in terms of input data and are often applied for the assessment of large areas (Cascini, 2008; Corominas et al., 2014; Sabatakakis et al., 2013; van Westen et al., 2008). Statistically based landslide susceptibility assessments rest on the assumption that the potential location of a future slope failure can be estimated by analysing past landslides and their relation to spatial geoenvironmental variables. Several studies have shown that the explanatory power of statistically based spatial landslide predictions depends on the reliability of the landslide training data (Ardizzone et al., 2002; Harp et al., 2011; Steger et al., 2016a; Zêzere et al., 2017).
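The statistical principle outlined above, relating mapped landslide presence/absence to geoenvironmental variables, can be illustrated with a minimal, purely hypothetical sketch: a logistic regression fitted by gradient descent on synthetic data. All names, coefficients and the single "slope angle" predictor are invented for illustration and do not come from the study; the study itself additionally employed mixed-effects logistic regression, which is not reproduced here.

```python
import math
import random

# Hypothetical sketch only: relate landslide presence/absence (1/0) in
# synthetic "grid cells" to one terrain variable (slope angle) via
# logistic regression. Data and parameters are invented for illustration.

random.seed(42)

def simulate(n=2000):
    """Generate synthetic cells with a known slope-susceptibility relation."""
    data = []
    for _ in range(n):
        slope = random.uniform(0.0, 45.0)                     # slope angle [deg]
        p_true = 1.0 / (1.0 + math.exp(-(-4.0 + 0.15 * slope)))  # assumed relation
        y = 1 if random.random() < p_true else 0              # landslide observed?
        data.append((slope, y))
    return data

def fit_logistic(data, lr=0.001, epochs=300):
    """Fit intercept b0 and coefficient b1 by batch gradient descent."""
    b0, b1 = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y)          # gradient of the log-loss w.r.t. b0
            g1 += (p - y) * x      # gradient of the log-loss w.r.t. b1
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

data = simulate()
b0, b1 = fit_logistic(data)
# Susceptibility estimate for a hypothetical 30-degree cell:
p30 = 1.0 / (1.0 + math.exp(-(b0 + b1 * 30.0)))
```

In this toy setting the fitted coefficient for slope comes out positive, so steeper cells receive higher susceptibility estimates; the study's point is precisely that such estimates inherit any incompleteness or positional error present in the landslide training samples.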

