Abstract
The Normal Quantile Transform (NQT) has been used in many hydrological and meteorological applications in order to make the Cumulative Distribution Function (CDF) of observed, simulated and forecast river discharge, water level or precipitation data Gaussian. It is also at the heart of the meta-Gaussian model for assessing the total predictive uncertainty of the Hydrological Uncertainty Processor (HUP) developed by Krzysztofowicz. In the field of geostatistics this transformation is better known as the Normal-Score Transform. In this paper some possible problems caused by small sample sizes when applying the NQT in flood forecasting systems are discussed, and a novel way to solve them is outlined by combining extreme value analysis and non-parametric regression methods. The method is illustrated with examples of hydrological stream-flow forecasts.
Highlights
The Normal Score Transform or Normal Quantile Transform (NQT) has been applied in various fields of geoscience in order to make the mostly asymmetrically distributed real-world observed variables more tractable and to fulfil the basic underlying assumption of normality, which is intrinsic to most statistical models (e.g. Moran, 1970; Goovaerts, 1997)
The main objective of this study is to show the difficulties that occur in the inversion of the empirical NQT when the normal random deviates lie outside the historically observed range, which is particularly important if this happens during the forecast lead time (see the sketch after these highlights)
In order to evaluate the dependence of the NQT on the sample size and its effect on the predictive uncertainty based on the meta-Gaussian distribution, two different cases have been analysed with respect to (a) extreme-value-based, (b) GAM-based and (c) combined extreme-value plus GAM-based extrapolation
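To make the inversion problem concrete, the following minimal Python sketch (not the paper's code) builds an empirical NQT from a synthetic skewed sample and shows that a normal deviate larger than any observed normal score cannot be meaningfully inverted: plain interpolation simply clips to the largest observation. The synthetic gamma sample, the Weibull plotting position and all names are illustrative assumptions.

# Minimal sketch (not the paper's code): empirical NQT of a discharge sample
# and the problem of inverting it outside the observed range.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
q_obs = rng.gamma(shape=2.0, scale=50.0, size=200)   # synthetic skewed "discharge" sample

# Forward NQT: rank the observations and convert plotting positions to
# standard normal deviates (Weibull plotting position i/(n+1) is one common choice).
q_sorted = np.sort(q_obs)
n = q_sorted.size
p = np.arange(1, n + 1) / (n + 1)
z_sorted = norm.ppf(p)                               # normal scores of the sample

def nqt_forward(q):
    """Map discharge to a standard normal deviate via the empirical CDF."""
    return norm.ppf(np.interp(q, q_sorted, p))

def nqt_inverse(z):
    """Map a normal deviate back to discharge; only defined inside the observed range."""
    return np.interp(z, z_sorted, q_sorted)

z_max = z_sorted[-1]
print("largest normal score supported by the sample:", round(z_max, 2))
# A forecast deviate beyond z_max (e.g. an extreme flood during the lead time)
# is silently clipped to the largest observation by np.interp -- the empirical
# inverse carries no information outside [q_sorted[0], q_sorted[-1]].
print("naive inverse of z = 3.5:", round(nqt_inverse(3.5), 1),
      "vs. largest observation:", round(q_sorted[-1], 1))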
Summary
The Normal Score Transform or NQT has been applied in various fields of geoscience in order to make the mostly asymmetrically distributed real-world observed variables more tractable and to fulfil the basic underlying assumption of normality, which is intrinsic to most statistical models (e.g. Moran, 1970; Goovaerts, 1997). For example, the meta-Gaussian model is constructed by embedding the NQT of each variate into the Gaussian law (Kelly and Krzysztofowicz, 1997), which allows the marginal distribution functions of the variates to take any form and the dependence structure between any two variates to be monotone, non-linear and heteroscedastic. This convenient property has been incorporated into the HUP (Krzysztofowicz and Kelly, 2000; Krzysztofowicz and Herr, 2001; Krzysztofowicz and Maranzano, 2004), which is part of several operational forecasting systems. The main objective of this study is to show the difficulties that occur in the inversion of the empirical NQT when the normal random deviates lie outside the historically observed range, which is particularly important if this happens during the forecast lead time.
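One way to picture the proposed remedy is sketched below: the empirical inverse NQT is used inside the observed range, and beyond the largest observation the quantile function is extended with a tail obtained from extreme value analysis, here a Generalized Pareto distribution (GPD) fitted to exceedances over a high threshold. This is only an illustrative stand-in under stated assumptions: the threshold choice, the use of scipy's genpareto, and the plain interpolation used in place of the GAM-based non-parametric regression described in the paper are all simplifications, not the authors' exact procedure.

# Hedged sketch: extending the inverse NQT beyond the observed range by
# splicing a Generalized Pareto (GPD) tail onto the empirical quantile
# function. Threshold and distributional choices are illustrative assumptions.
import numpy as np
from scipy.stats import norm, genpareto

rng = np.random.default_rng(0)
q_obs = np.sort(rng.gamma(shape=2.0, scale=50.0, size=200))
n = q_obs.size
p = np.arange(1, n + 1) / (n + 1)
z = norm.ppf(p)

# Extreme value component: fit a GPD to exceedances over a high threshold
# (here the 90th percentile of the sample, an arbitrary illustrative choice).
threshold = np.quantile(q_obs, 0.90)
excess = q_obs[q_obs > threshold] - threshold
p_u = np.mean(q_obs <= threshold)                     # non-exceedance probability at the threshold
shape, _, scale = genpareto.fit(excess, floc=0.0)

def inverse_nqt_extended(z_val):
    """Inverse NQT: empirical interpolation inside the sample,
    GPD tail quantile above the largest observed normal score."""
    p_val = norm.cdf(z_val)
    if p_val <= p[-1]:
        return np.interp(z_val, z, q_obs)             # interior: empirical inverse
    # tail: conditional GPD quantile for the exceedance probability above the threshold
    cond_p = (p_val - p_u) / (1.0 - p_u)
    return threshold + genpareto.ppf(cond_p, shape, loc=0.0, scale=scale)

for zz in (1.0, 2.5, 3.5):
    print(f"z = {zz:3.1f} -> discharge ~ {inverse_nqt_extended(zz):8.1f}")

Unlike the naive interpolation in the previous sketch, this extended inverse keeps increasing for normal deviates beyond the largest observed score, so extreme forecast deviates during the lead time map to discharges above the historical maximum instead of being clipped to it.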