Abstract

Rainfall-triggered landslides are well-known natural hazards that pose significant risks, and considerable effort has been invested in reducing the risk associated with this type of phenomenon. One approach to reducing such risk is the establishment of landslide early warning systems (LEWSs). LEWSs are designed to proactively identify conditions favorable to the initiation of landslides. At the regional scale, LEWSs are usually based on statistical methodologies that determine the minimum amount of rainfall required to trigger a landslide, often expressed as a minimum intensity or cumulative rainfall over a given time period. This research explores the use of artificial intelligence, specifically Long Short-Term Memory (LSTM) networks, to classify rainfall time series as either likely or not likely to result in a landslide. Various time series lengths and model configurations were tested to identify the best setting of the model. The selected test site was the Emilia-Romagna region in Italy, which has a robust landslide inventory with assessed accuracy. Model performance was evaluated using several statistical indicators, including sensitivity (0.9), specificity (0.8), positive prediction power (0.82), negative prediction power (0.89), efficiency (0.85), and misclassification rate (0.15). These results show that the defined model correctly identified the rainfall conditions associated with landslide initiation with a high degree of accuracy and a low rate of false positives. In summary, this research demonstrates the potential of artificial intelligence, particularly LSTM networks, to improve the accuracy of LEWSs by analyzing rainfall time series data, ultimately enhancing our ability to predict and mitigate the risks of rainfall-triggered landslides.
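
To make the described approach concrete, the sketch below shows one way an LSTM-based binary classifier of rainfall windows could be set up, together with the confusion-matrix indicators listed in the abstract (sensitivity, specificity, positive and negative prediction power, efficiency, misclassification rate). This is a minimal illustration under assumed settings (window length, hidden units, optimizer), not the authors' actual configuration.

```python
# Minimal sketch of an LSTM that classifies fixed-length rainfall time series
# as landslide-triggering or not. Window length, layer sizes, and training
# hyperparameters are illustrative assumptions, not the paper's configuration.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

WINDOW_DAYS = 30  # assumed rainfall window length (the paper tests several lengths)

def build_classifier(window_days: int = WINDOW_DAYS) -> Sequential:
    """Binary classifier: one rainfall value per time step -> triggering probability."""
    model = Sequential([
        LSTM(32, input_shape=(window_days, 1)),  # assumed 32 hidden units
        Dense(1, activation="sigmoid"),          # P(rainfall window triggers a landslide)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def evaluation_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Confusion-matrix indicators reported in the abstract."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "sensitivity": tp / (tp + fn),                   # true positive rate
        "specificity": tn / (tn + fp),                   # true negative rate
        "positive_prediction_power": tp / (tp + fp),
        "negative_prediction_power": tn / (tn + fn),
        "efficiency": (tp + tn) / (tp + tn + fp + fn),   # overall accuracy
        "misclassification_rate": (fp + fn) / (tp + tn + fp + fn),
    }

# Usage with synthetic data, for illustration only:
# X = np.random.rand(1000, WINDOW_DAYS, 1)      # rainfall windows
# y = np.random.randint(0, 2, size=1000)        # 1 = window preceded a landslide
# model = build_classifier()
# model.fit(X, y, epochs=10, batch_size=64, validation_split=0.2)
# y_hat = (model.predict(X) > 0.5).astype(int).ravel()
# print(evaluation_metrics(y, y_hat))
```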
