Abstract

Neural networks are increasingly used in hydrology owing to their parsimony and their ability to act as universal approximators of nonlinear systems. Nevertheless, because hydrological data contain noise and approximations, which can be very significant in some cases, such models are particularly sensitive to increases in model complexity. This trade-off is known in machine learning as the bias–variance dilemma and can be addressed by suitable regularization methods. Following a presentation of the bias–variance dilemma along with regularization methods such as cross-validation, early stopping, and weight decay, an application is provided for simulating and forecasting karst aquifer outflows at the Lez site. The effectiveness of this regularization process is thus demonstrated on a nonlinear, partially unknown basin. Finally, results are presented for the most intense rainfall event in the database, allowing an assessment of the ability of neural networks to generalize to rare or extreme events.
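
To make the two training-time regularization methods named above concrete, the sketch below shows weight decay (an L2 penalty on the weights, applied through the optimizer) and early stopping (halting training when validation error stops improving) in a generic PyTorch training loop. This is an illustrative assumption, not the authors' implementation: the network size, the synthetic rainfall–discharge data, and hyperparameters such as `patience` are placeholders.

```python
# Minimal sketch (not the authors' code): weight decay and early stopping
# for a small feed-forward network on synthetic rainfall-discharge data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-ins for lagged rainfall inputs and aquifer outflow targets.
X = torch.randn(200, 8)                                   # 8 lagged rainfall features
y = X.sum(dim=1, keepdim=True) + 0.3 * torch.randn(200, 1)
X_train, y_train, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

model = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 1))
loss_fn = nn.MSELoss()
# Weight decay: an L2 penalty on the weights, applied via the optimizer.
opt = torch.optim.Adam(model.parameters(), lr=1e-2, weight_decay=1e-4)

best_val, best_state, patience, wait = float("inf"), None, 20, 0
for epoch in range(500):
    model.train()
    opt.zero_grad()
    loss_fn(model(X_train), y_train).backward()
    opt.step()

    # Early stopping: monitor held-out validation error each epoch.
    model.eval()
    with torch.no_grad():
        val = loss_fn(model(X_val), y_val).item()
    if val < best_val:
        best_val, wait = val, 0
        best_state = {k: v.clone() for k, v in model.state_dict().items()}
    else:
        wait += 1
        if wait >= patience:          # no improvement for `patience` epochs
            break

model.load_state_dict(best_state)     # restore the best validating weights
```

Both methods constrain effective model complexity rather than shrinking the network itself, which is how they trade a small increase in bias for a large reduction in variance.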
