Abstract

Landslide susceptibility (LS) maps have been produced at various scales of analysis, each with specific zoning purposes and techniques. Supervised machine learning (ML) algorithms have become one of the most widely used techniques for landslide prediction, and their reliability depends strongly on the quality of the input data. Site-specific landslide inventories are often more accurate and complete than national or worldwide databases; for this reason, a detailed landslide inventory and predisposing variables must be collected to derive reliable LS products. However, high-quality data are often scarce, and risk managers must instead rely on the available lower-resolution products, which serve little more than informative purposes. In this work, we compared different ML models to select the most accurate for large-scale LS assessment within the Municipality of Rome. The ExtraTreesClassifier outperformed the others, reaching an average F1-score of 0.896. We then assessed the reliability of open-source LS maps at different scales of analysis (global to regional) by means of statistical and spatial analysis. The results shed light on how hazard zoning differs depending on the scale and mapping unit. Finally, we attempted an approach for low-resolution LS data fusion and assessed the importance of the adopted criteria, which increased the ability to detect occurred landslides while maintaining precision.
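The model-comparison step described above can be sketched with scikit-learn. This is a minimal illustration, not the paper's pipeline: the data here are synthetic placeholders standing in for mapping units and predisposing variables, and the candidate models and fold count are assumptions for the example.

```python
# Hedged sketch: comparing supervised classifiers for landslide
# susceptibility by cross-validated F1-score, the metric the abstract
# reports. Synthetic data only; not the paper's actual inventory.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: rows are mapping units, columns are predisposing
# variables (e.g. slope, lithology, land cover), labels mark landslides.
X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=6, random_state=42)

# Hypothetical candidate set; the paper compared several ML models.
models = {
    "ExtraTrees": ExtraTreesClassifier(n_estimators=200, random_state=42),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=42),
    "LogisticRegression": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    # Average F1 over 5 stratified folds, mirroring the reported metric.
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```

In this scheme, the classifier with the highest mean F1 across folds would be retained for the final susceptibility map, which is the selection criterion the abstract implies.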
