Hydrologic models are robust tools for estimating key parameters in the management of water resources, including water inputs, storage, and pathway fluxes. The choice between process-based and data-driven model structures is an important consideration, particularly as advances in machine learning offer improved model performance, often at the cost of lacking physical analogues. Despite these advances, cross-model comparisons of the tradeoffs between process-based and data-driven model types in settings with varying hydrologic controls remain scarce. In this study, we use physically based (SWAT), conceptually based (LUMP), and deep-learning (LSTM) models to simulate hydrologic pathway contributions for a fluvial watershed and a karst basin over a twenty-year period. We find that, while all models produced satisfactory results, the LSTM model outperformed both the SWAT and LUMP models in simulating total discharge, and that the improvement was more evident in the groundwater-dominated karst system than in the surface-dominated fluvial stream. Further, the LSTM model achieved this improved performance with only 10–25% of the observed time series as training data. Regarding pathways, the LSTM model coupled with a recursive digital filter successfully matched the magnitudes of process-based estimates of quick, intermediate, and slow flow contributions for both basins (ρ ranging from 0.58 to 0.71). However, the process-based models exhibited more realistic time-fractal scaling of hydrologic flow pathways than the LSTM model, which, depending on project objectives, presents a potential drawback to the use of machine learning models for some hydrologic applications. This study demonstrates the utility of LSTM modeling and the potential to extract physical analogues from its predictions, which will be useful as deep learning approaches to hydrologic modeling become more prominent and modelers look for ways to infer physical information from data-driven predictions.
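To illustrate the pathway-separation step, the sketch below shows how a recursive digital filter of the Lyne–Hollick form can partition a simulated hydrograph into quick, intermediate, and slow components by applying the filter twice with different smoothing parameters. The abstract does not specify which filter variant or parameter values were used, so the function names, the two-pass scheme, and the alpha values here are illustrative assumptions rather than the authors' exact method.

```python
import numpy as np

def lyne_hollick_quickflow(q, alpha=0.925):
    """One forward pass of a Lyne-Hollick recursive digital filter.
    Returns the high-frequency (quick) component of streamflow q."""
    f = np.zeros_like(q, dtype=float)
    for k in range(1, len(q)):
        f[k] = alpha * f[k - 1] + 0.5 * (1.0 + alpha) * (q[k] - q[k - 1])
        # Quickflow cannot be negative or exceed total flow
        f[k] = min(max(f[k], 0.0), q[k])
    return f

def three_way_separation(q, alpha_quick=0.925, alpha_slow=0.98):
    """Split total discharge into quick, intermediate, and slow pathways
    by filtering twice with different smoothing parameters (illustrative
    parameter values; not from the study)."""
    quick = lyne_hollick_quickflow(q, alpha_quick)       # fast storm response
    residual = q - quick                                 # delayed flow
    intermediate = lyne_hollick_quickflow(residual, alpha_slow)
    slow = residual - intermediate                       # slow / groundwater flow
    return quick, intermediate, slow

# Example: partition a (hypothetical) LSTM-simulated hydrograph in m^3/s
q_sim = np.array([1.2, 1.1, 4.8, 9.5, 6.0, 3.2, 2.1, 1.7, 1.5, 1.4])
quick, inter, slow = three_way_separation(q_sim)
```

The pathway fractions obtained this way could then be compared against the internal flow components reported by the process-based models, which is the kind of comparison the abstract summarizes with the reported rank correlations.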