A material consequence of climate change is the intensification of extreme precipitation in most regions across the globe. The corresponding trend signal is already detectable at global to regional scales, but long-term variability still dominates local observational records, which form the basis for extreme precipitation risk assessment. Whether the frequency of extreme events is purely random or driven by low-frequency internal variability is therefore highly relevant for modelling the expected number of extreme events in a typical observational record. Based on millennial climate simulations, we show that long-term variability is largely random, with no clear indication of low-frequency decadal to multidecadal variability. Nevertheless, extreme precipitation events occur highly irregularly, with potential clustering (an 11% probability of five or more 100-year events in 250 years) or long disaster gaps with no events (an 8% probability of no 100-year events in 250 years). Even for decades-long precipitation records, a complete absence of tail events is not unlikely: for a typical 70-year observational or reanalysis record, the probability is almost 50%. This generally causes return levels – a key metric for infrastructure codes and insurance pricing – to be underestimated. We also evaluate the potential of pooling information across neighbouring locations, which substantially improves return-level estimation by increasing robustness against the adverse effects of long-term internal variability. The irregular occurrence of events makes it challenging to estimate return periods for planning and for extreme event attribution.
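The quoted probabilities can be reproduced under the simplest reading of "largely random" occurrence: a minimal sketch, assuming 100-year events follow a homogeneous Poisson process with a rate of one event per 100 years on average (the abstract does not specify this model; it is an illustrative assumption consistent with the stated numbers).

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k events when the expected count is lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Expected number of 100-year events in a record of given length.
lam_250 = 250 / 100  # 2.5 expected events in 250 years
lam_70 = 70 / 100    # 0.7 expected events in 70 years

# Clustering: five or more 100-year events in 250 years.
p_five_or_more = 1.0 - sum(poisson_pmf(k, lam_250) for k in range(5))

# Disaster gaps: no 100-year event at all.
p_none_250 = poisson_pmf(0, lam_250)
p_none_70 = poisson_pmf(0, lam_70)

print(f"P(>=5 events in 250 yr) = {p_five_or_more:.3f}")  # ~0.109 (11%)
print(f"P(0 events in 250 yr)   = {p_none_250:.3f}")      # ~0.082 (8%)
print(f"P(0 events in 70 yr)    = {p_none_70:.3f}")       # ~0.497 (almost 50%)
```

The agreement with the abstract's 11%, 8%, and ~50% figures illustrates that the reported irregularity (clustering and gaps) needs no low-frequency forcing; it is what purely random occurrence already implies.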