Abstract
Machine learning (ML) has been extensively applied in various disciplines. However, little attention has been paid to data heterogeneity in databases and to the number of samples used to train ML models in hydrology. In this study, we addressed these issues and their impacts on the accuracy and reliability of ML models in the estimation of saturated hydraulic conductivity, Ks. We selected 17,990 soil samples from the USKSAT database and created random subsets of N = 2,000, 4,000, 6,000, 8,000, 10,000, 12,000, 14,000, 16,000, and 17,990 samples, 80% of which were used for training. The random subset selection was repeated 50 times. The extreme gradient boosting (XGBoost) algorithm was used to estimate Ks from other soil properties, such as bulk density, soil depth, texture, and organic content. For each subset, we conducted learning-curve analysis on the training and cross-validation data sets. Results showed that, for all training sample sizes, the training and cross-validation curves did not reach a plateau, indicating that the number of samples was insufficient. We also applied the concept of representative elementary volume by plotting the average coefficient of determination, R2, and root mean square log-transformed error, RMSLE, against the training sample size. For the testing data set, as the training sample size increased from 1,600 to 14,392, the average R2 value increased from 0.74 to 0.90, while the average RMSLE value decreased from 1.08 to 0.69. Either learning-curve analysis or representative-sample-size analysis is therefore required to determine whether the number of samples is sufficient.
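The repeated-subsampling evaluation described above can be sketched in a few lines. The following is a minimal, stdlib-only illustration, not the authors' pipeline: the XGBoost model is replaced by a caller-supplied `fit_predict` placeholder, the subset sizes and repeat count are parameters, and RMSLE is assumed to use the common log1p form (the paper does not specify the exact log transform).

```python
import math
import random

def r2_score(y_true, y_pred):
    """Coefficient of determination, R^2."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def rmsle(y_true, y_pred):
    """Root mean square log-transformed error (log1p form assumed)."""
    return math.sqrt(
        sum((math.log1p(p) - math.log1p(t)) ** 2 for t, p in zip(y_true, y_pred))
        / len(y_true)
    )

def repeated_subset_eval(samples, subset_sizes, n_repeats, fit_predict):
    """For each subset size N, draw n_repeats random subsets, split 80/20
    into train/test, and average R^2 and RMSLE on the held-out 20%.

    `samples` is a list of (features, Ks) pairs; `fit_predict(train, X_test)`
    is a hypothetical model wrapper (XGBoost in the paper) returning
    predictions for X_test.
    """
    results = {}
    for n in subset_sizes:
        r2s, errs = [], []
        for _ in range(n_repeats):
            subset = random.sample(samples, n)
            split = int(0.8 * n)
            train, test = subset[:split], subset[split:]
            y_true = [y for _, y in test]
            y_pred = fit_predict(train, [x for x, _ in test])
            r2s.append(r2_score(y_true, y_pred))
            errs.append(rmsle(y_true, y_pred))
        results[n] = (sum(r2s) / len(r2s), sum(errs) / len(errs))
    return results
```

Plotting the averaged R2 and RMSLE in `results` against N reproduces the representative-sample-size curves described in the abstract (with 50 repeats and N up to 17,990 in the study itself).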