Machine learning (ML) models are extensively used in spatial predictive modeling, including landslide susceptibility prediction. Performance statistics are vital for assessing the reliability of these models and are typically obtained using the random cross-validation (R-CV) method. However, R-CV has a major drawback: it ignores the spatial autocorrelation (SAC) inherent in spatial datasets when partitioning the training and testing sets. We assessed the impact of SAC at three crucial phases of ML modeling: hyperparameter tuning, performance evaluation, and learning curve analysis. As an alternative to R-CV, we used spatial cross-validation (S-CV), which accounts for SAC when partitioning the training and testing subsets. The experiment was conducted on regional landslide susceptibility prediction using seven ML models: logistic regression (LR), k-nearest neighbor (KNN), linear discriminant analysis (LDA), artificial neural networks (ANN), support vector machine (SVM), random forest (RF), and C5.0. The results showed that R-CV often produces optimistic performance estimates, e.g., 6–18% higher than those obtained using S-CV. R-CV also occasionally fails to reveal the true importance of the hyperparameters of models such as SVM and ANN. Additionally, R-CV falsely portrays a considerable improvement in model performance as the number of variables increases, whereas no such improvement appeared when the models were evaluated using S-CV. The impact of SAC was more noticeable in complex models such as SVM, RF, and C5.0 (ANN being an exception) than in simple models such as LDA and LR (KNN being an exception). Overall, we recommend S-CV over R-CV for a reliable assessment of ML model performance in large-scale landslide susceptibility mapping (LSM).
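To make the R-CV versus S-CV distinction concrete, the sketch below shows one common way to implement spatial cross-validation, assuming scikit-learn: sample coordinates are clustered into spatial blocks with KMeans, and GroupKFold keeps each block entirely inside either the training or the testing fold, while R-CV shuffles samples freely. The synthetic data, the block count, and the random-forest classifier are illustrative assumptions, not the study's exact protocol.

```python
# Minimal sketch of R-CV vs. S-CV, assuming scikit-learn.
# The data, block count, and model choice are illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n = 1000
coords = rng.uniform(0, 100, size=(n, 2))   # sample locations (x, y)
X = rng.normal(size=(n, 5))                 # predictor variables
y = rng.integers(0, 2, size=n)              # landslide / non-landslide labels

model = RandomForestClassifier(n_estimators=200, random_state=0)

# R-CV: folds are drawn at random, ignoring spatial autocorrelation.
rcv = cross_val_score(
    model, X, y,
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="roc_auc",
)

# S-CV: cluster locations into spatial blocks, then keep each block
# entirely in either the training or the testing fold.
blocks = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coords)
scv = cross_val_score(
    model, X, y,
    cv=GroupKFold(n_splits=5),
    groups=blocks,
    scoring="roc_auc",
)

print(f"R-CV AUC: {rcv.mean():.3f}  S-CV AUC: {scv.mean():.3f}")
```

On real spatially autocorrelated data, the R-CV score would typically exceed the S-CV score, which is the optimistic-estimate effect the abstract reports; with the purely random data above, the two are expected to be similar.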