Abstract

Optimal fingerprinting is a standard method for detecting climate change. One of the uncertainties this method must account for is that the response to climate forcing is not known exactly but is, in practice, estimated from ensemble averages of model simulations. This uncertainty can be accommodated with an Errors-in-Variables model (equivalently, the Total Least Squares method) and expressed through confidence intervals. Unfortunately, the predominant paradigm for deriving confidence intervals, likelihood ratio theory, is not guaranteed to work here because the number of parameters estimated in the Errors-in-Variables model grows with the number of observations. This paper discusses various methods for deriving confidence intervals and shows that the widely used intervals proposed in the seminal paper by Allen and Stott are effectively equivalent to bias-corrected intervals from likelihood ratio theory. A new, computationally simpler method for computing these intervals is derived. Nevertheless, these confidence intervals are incorrect in the “weak-signal regime”. This conclusion does not necessarily invalidate previous detection and attribution studies, because many such studies lie in the strong-signal regime, for which standard methods give correct confidence intervals. A new diagnostic is introduced to check whether or not a data set lies in the weak-signal regime. Finally, and most importantly, a bootstrap method is shown to give correct confidence intervals in both the strong- and weak-signal regimes, and it always produces finite confidence intervals, in contrast to the likelihood ratio method, which can give unbounded intervals that do not match the actual uncertainty.
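
To make the setup concrete, the sketch below illustrates, in Python (not used in the paper), a single-fingerprint Total Least Squares estimate of the scaling factor together with a simple percentile bootstrap interval. The function names, the pairs-resampling scheme, and the synthetic data are illustrative assumptions only; the paper's actual procedure additionally accounts for the noise covariance and the ensemble structure of the fingerprint estimate.

```python
import numpy as np

def tls_scaling_factor(x, y):
    """Total Least Squares estimate of beta in y ~ beta * x, when both the
    model-derived fingerprint x and the observations y contain noise of
    comparable variance. Uses the SVD of the augmented matrix [x, y]."""
    Z = np.column_stack([x, y])                  # n x 2 augmented matrix
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[-1]                                   # right singular vector of the smallest singular value
    return -v[0] / v[1]                          # TLS slope estimate

def bootstrap_ci(x, y, n_boot=2000, alpha=0.10, seed=0):
    """Percentile bootstrap confidence interval for the TLS scaling factor,
    resampling paired (x, y) rows with replacement (a simplified scheme)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    betas = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        betas[b] = tls_scaling_factor(x[idx], y[idx])
    return np.quantile(betas, [alpha / 2, 1 - alpha / 2])

# Synthetic illustration: a "true" forced-response pattern observed with noise.
rng = np.random.default_rng(1)
n = 200
signal = np.sin(np.linspace(0.0, 6.0, n))            # stand-in forced response pattern
x = signal + 0.3 * rng.standard_normal(n)            # noisy ensemble-mean fingerprint
y = 0.8 * signal + 0.3 * rng.standard_normal(n)      # "observations": beta = 0.8 plus noise

beta_hat = tls_scaling_factor(x, y)
lo, hi = bootstrap_ci(x, y)
print(f"TLS scaling factor: {beta_hat:.2f}, 90% bootstrap CI: [{lo:.2f}, {hi:.2f}]")
```

In this toy setting, a confidence interval whose lower bound exceeds zero would correspond to detection of the forced signal; the abstract's point is that percentile-type bootstrap intervals of this kind remain finite and reliable even when the signal is weak, whereas likelihood-ratio intervals can become unbounded.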
