Abstract

Bayesian Neural Networks (BNNs) have been shown to be useful tools for analyzing the modeling uncertainty of Neural Networks (NNs). This research compares two types of BNNs. The first (BNN-I) uses statistical methods to characterize the individual uncertainty sources (input, parameter, and model structure) and integrates these uncertainties into a Markov Chain Monte Carlo (MCMC) framework to estimate total uncertainty. The second (BNN-II) lumps all uncertainties into a single error term (the residual between model prediction and measurement). In this study, we propose a simple BNN-II that uses Genetic Algorithms (GA) and Bayesian Model Averaging (BMA) to calibrate neural networks with different structures (numbers of hidden units) and combines the predictions from the different NNs to derive both predictions and uncertainty estimates. We tested the two BNNs in two watersheds for daily and monthly hydrologic simulations. The BMA-based BNN-II developed here outperformed BNN-I in both watersheds in terms of prediction accuracy and uncertainty estimation. These results indicate that, given an incomplete understanding of the characteristics of each uncertainty source and their interactions, the simple lumped-error approach may yield better predictions and uncertainty estimates.
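The BMA combination step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each calibrated NN contributes a vector of point predictions and a posterior model weight (e.g., derived from a likelihood or information-criterion score), and it reports only the between-model component of the predictive variance; the function name `bma_combine` and all inputs are hypothetical.

```python
import numpy as np

def bma_combine(preds, weights):
    """Combine member-model predictions via Bayesian Model Averaging.

    preds   : (n_models, n_samples) array of each NN's predictions
    weights : (n_models,) unnormalized posterior model weights
    Returns the BMA mean prediction and the between-model variance,
    a simple spread-based uncertainty estimate (within-model variances
    are omitted for brevity).
    """
    preds = np.asarray(preds, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize to posterior probabilities
    mean = w @ preds                     # weighted average prediction
    between_var = w @ (preds - mean) ** 2  # weighted spread around the mean
    return mean, between_var
```

With two equally weighted models predicting 1.0 and 3.0 at every time step, the BMA mean is 2.0 and the between-model variance is 1.0, illustrating how disagreement among structures translates into an uncertainty band.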
