Abstract

Recurrent neural networks (RNNs) are a class of artificial neural networks capable of learning complicated nonlinear relationships and functions from data. The catchment-scale daily rainfall–runoff relationship is a nonlinear, sequential process that can potentially benefit from these intelligent algorithms. However, RNNs are perceived as difficult to parameterize, which translates into significant epistemic (lack of knowledge about a physical system) and aleatory (inherent randomness in a physical system) uncertainties in modeling. The current study investigates variational Bayesian dropout, also known as Monte Carlo dropout (MC-dropout), as a diagnostic approach to RNN evaluation that can learn a mapping function while accounting for both data and model uncertainty. The MC-dropout technique is coupled with three RNN architectures, namely the vanilla RNN, long short-term memory (LSTM), and gated recurrent unit (GRU), to approximate Bayesian inference in a deep Gaussian noise process and to quantify both epistemic and aleatory uncertainties in daily rainfall–runoff simulation across a mixed urban and rural coastal catchment in North Carolina, USA. The variational Bayesian outcomes were then compared with observed data as well as with simulation results from the well-known Sacramento soil moisture accounting (SAC-SMA) model. The analysis suggested a considerable improvement in predictive log-likelihood when the MC-dropout technique, with an inherent input-data Gaussian noise term applied to the RNN layers, was used to implicitly mitigate overfitting and simulate daily streamflow records. Our experiments on the three RNN models across a broad range of simulation strategies demonstrated the superiority of the LSTM and GRU approaches relative to the SAC-SMA conceptual hydrologic model.
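The core of the MC-dropout idea described above is to keep dropout active at prediction time and average over repeated stochastic forward passes, so that the spread of the samples approximates predictive uncertainty. The following is a minimal illustrative sketch, assuming a hypothetical single-hidden-layer network in place of the paper's RNN architectures; all weights and names here are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical tiny network standing in for the paper's RNNs:
# a single hidden layer with dropout kept ON at prediction time.
W1 = rng.normal(size=(16, 3))   # input -> hidden weights (illustrative)
w2 = rng.normal(size=16)        # hidden -> output weights (illustrative)

def mc_dropout_predict(x, n_samples=500, p_drop=0.2):
    """Run repeated stochastic forward passes with dropout active
    and return the predictive mean and variance of the samples."""
    preds = np.empty(n_samples)
    for i in range(n_samples):
        h = np.maximum(0.0, W1 @ x)             # hidden activations (ReLU)
        mask = rng.random(h.shape) > p_drop     # fresh Bernoulli dropout mask
        h = h * mask / (1.0 - p_drop)           # inverted-dropout rescaling
        preds[i] = w2 @ h
    return preds.mean(), preds.var()

mean, var = mc_dropout_predict(np.array([0.5, -1.0, 2.0]))
```

The sample mean serves as the point prediction, while the sample variance reflects model (epistemic) uncertainty; in the study's setting, an additional observation-noise term would be added to capture the aleatory component.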
