Abstract

Study region
Han River Basin, Shaanxi Province, China

Study focus
Machine Learning (ML) has emerged as a promising approach for accurate runoff forecasting. However, the lack of interpretability in ML models raises concerns about the trustworthiness and credibility of their predictions. Explainable Artificial Intelligence (XAI) gives researchers a deeper understanding of the inner workings of these models, improving the transparency and reliability of their predictions. In this study, the Integrated Gradients (IG) method is used to interpret a long short-term memory (LSTM) model, covering both individual flood events and the full runoff forecasting period. Finally, we examine the uncertainty in the interpretation results from three dimensions: the interpretation method, the model, and the input features.

New hydrological insights for the region
The results show that: (1) runoff formation is dominated by recent rainfall, and the dominant driver of the predictions varies across flood events; (2) changes in a feature's contribution are related to the feature's value, and the joint effect of two features also influences their contributions; (3) different interpretation methods yield quantitatively different results, and the interpretations are affected by model parameters and by the important input features.
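To make the interpretation step concrete, below is a minimal sketch of Integrated Gradients applied to an LSTM forecaster, assuming a PyTorch setup. It is illustrative only: the `RunoffLSTM` architecture, feature set, window length, zero baseline, and step count are assumptions for the example, not the study's actual configuration.

```python
# Minimal sketch of Integrated Gradients (IG) for an LSTM runoff model.
# Illustrative only: RunoffLSTM, the feature set, window length, zero
# baseline, and step count are assumptions, not the study's configuration.
import torch
import torch.nn as nn

class RunoffLSTM(nn.Module):
    """Toy LSTM mapping a (batch, time, features) window to one runoff value."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # forecast from the last time step

def integrated_gradients(model, x, baseline=None, steps=50):
    """Approximate IG_i = (x_i - x'_i) * integral_0^1 dF/dx_i evaluated along
    the straight path from baseline x' to input x, via a Riemann sum."""
    if baseline is None:
        baseline = torch.zeros_like(x)  # all-zero baseline (a common default)
    grads = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).requires_grad_(True)
        grad, = torch.autograd.grad(model(point).sum(), point)
        grads.append(grad)
    avg_grad = torch.stack(grads).mean(dim=0)
    # Attribution has the same shape as x: one value per time step per feature.
    return (x - baseline) * avg_grad

# Hypothetical usage: attribute one forecast to lagged inputs
# (e.g. rainfall, temperature, antecedent runoff over a 10-step window).
model = RunoffLSTM(n_features=3)
window = torch.randn(1, 10, 3)  # synthetic input window
attributions = integrated_gradients(model, window)
print(attributions.shape)       # torch.Size([1, 10, 3])
```

Summing the attributions over time steps gives a per-feature contribution to a single forecast; comparing such contributions across flood events is the kind of event-level analysis the abstract describes.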
