Abstract
Landslides are well-known phenomena that cause significant changes to an area's terrain, often damaging technical infrastructure and causing loss of life. One possible means of reducing the negative impact of landslides on people's lives and property is to recognize areas prone to their occurrence. The most common approach to this problem is preparing landslide susceptibility maps. These can factor in the actual locations of landslides or the causal relationship between landslides and selected environmental factors. Classifying landslide-prone areas is challenging when landslide density is low and the area of analysis is large. We prepared shallow-landslide susceptibility maps at 10 m × 10 m resolution for the Wiśnickie Foothills (Western Carpathians, Poland) using eleven different machine learning algorithms from the Python libraries Scikit-learn and Imbalanced-Learn. The analyzed area is characterized by a mean density of 3.4 surficial landslides (composed of soils and rocks) per km². We also compared different approaches to handling imbalanced data across the eleven algorithms: Logistic Regression, Naive Bayes, Random Forest, AdaBoost, Bagging, Extra-Trees (Extremely Randomized Trees), Easy Ensemble, Balanced Bagging, Balanced Random Forest, RUSBoost, and a hybrid model combining the Random Under Sampler and Multi-layer Perceptron algorithms. The environmental factors (slope inclination and aspect, distance from rivers, lithology, soil type and permeability, groundwater table depth, profile and plan curvature, mean annual rainfall) were categorized and divided into training (70%) and testing (30%) sets. Accuracy, recall, G-mean, and area under the receiver operating characteristic curve (AUC) were used to validate the quality of the models. The results confirmed that algorithms based on decision tree classifiers are suitable for preparing landslide susceptibility maps.
We also found that methods that train on random undersampling subsets (Easy Ensemble, Balanced Bagging, RUSBoost) and ensemble methods trained on the full data (Bagging, AdaBoost, Extra-Trees) yield very similar test results. Relatively high-quality results can also be obtained by integrating the Random Under Sampler algorithm with the Multi-layer Perceptron algorithm.
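For readers unfamiliar with the validation workflow described above, a minimal sketch may help. It uses synthetic, imbalanced data as a stand-in for the categorized environmental factors (the real rasters and the paper's eleven models are not reproduced here), a plain Random Forest as an example classifier, the study's 70/30 split, and the four metrics named in the abstract:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

# Synthetic stand-in for the categorized environmental factors:
# 9 features, heavily imbalanced classes (landslide cells are rare).
X, y = make_classification(n_samples=5000, n_features=9, n_informative=5,
                           weights=[0.97, 0.03], random_state=0)

# 70/30 train/test split as in the study, stratified to keep the class ratio.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
y_pred = clf.predict(X_te)
y_prob = clf.predict_proba(X_te)[:, 1]

# The four validation metrics named in the abstract.
acc = accuracy_score(y_te, y_pred)
rec = recall_score(y_te, y_pred)                # sensitivity (landslide class)
spec = recall_score(y_te, y_pred, pos_label=0)  # specificity (stable class)
gmean = np.sqrt(rec * spec)                     # geometric mean of the two
auc = roc_auc_score(y_te, y_prob)
print(f"accuracy={acc:.3f} recall={rec:.3f} G-mean={gmean:.3f} AUC={auc:.3f}")
```

G-mean is computed by hand here as the square root of sensitivity times specificity; on imbalanced data it penalizes a classifier that achieves high accuracy simply by predicting the majority (stable) class everywhere, which is why the abstract reports it alongside accuracy.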