Landslide prediction is widely recognized by the global scientific community as a complex problem. Research on landslide susceptibility prediction is vital for mitigating and preventing landslide disasters. The instability and complexity of the landslide system introduce uncertainty into both the prediction process and its results. Although many types of models exist for landslide susceptibility prediction, they still lack a unified theoretical basis and a common standard for accuracy testing. In the past, models were mainly selected according to researchers' subjective experience, and such subjective selection often introduced additional uncertainty into the prediction process and results. To improve model generality and the reliability of prediction accuracy, it is urgent to systematically summarize and analyze the performance of different models in order to reduce the impact of uncertain factors on prediction results. To this end, this paper made extensive use of document analysis and data mining tools for a bibliometric and knowledge mapping analysis of 600 publications from the past 40 years, collected from two data platforms, Web of Science and Scopus. The study focused on uncertainty analysis in four key research subfields (namely disaster-causing factors, prediction units, model spatial data sets, and prediction models), systematically summarized the difficulties and hotspots in the development of landslide prediction models, discussed the main problems encountered in these four subfields, and put forward suggestions to serve as references for further improving the prediction accuracy of landslide susceptibility.
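To make the bibliometric workflow concrete, the sketch below shows one minimal way such an analysis could be set up: counting author-keyword frequencies and keyword co-occurrences across records exported from Web of Science or Scopus. This is an illustrative assumption, not the paper's actual pipeline; the file name `wos_scopus_export.csv` and the "Author Keywords" column are hypothetical stand-ins for whatever export format is used.

```python
# Minimal sketch of a keyword-based bibliometric analysis over records
# exported from Web of Science / Scopus. The CSV path and the
# "Author Keywords" column name are assumptions about the export format,
# not details taken from the paper itself.
import csv
import itertools
from collections import Counter

def load_keywords(path, column="Author Keywords", sep=";"):
    """Yield one lowercase keyword list per bibliographic record."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            raw = row.get(column, "")
            yield [k.strip().lower() for k in raw.split(sep) if k.strip()]

def keyword_stats(records):
    """Return (frequency, co-occurrence) counters over all records."""
    freq, cooc = Counter(), Counter()
    for kws in records:
        unique = sorted(set(kws))
        freq.update(unique)
        # Each unordered keyword pair within a record is one edge in the
        # co-occurrence network used for knowledge mapping.
        cooc.update(itertools.combinations(unique, 2))
    return freq, cooc

if __name__ == "__main__":
    freq, cooc = keyword_stats(load_keywords("wos_scopus_export.csv"))
    print("Top keywords:", freq.most_common(10))
    print("Top keyword pairs:", cooc.most_common(10))
```

The resulting co-occurrence counts are the raw material that dedicated mapping tools typically cluster and visualize; hotspots such as the four subfields named above would appear as densely connected keyword groups.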