Abstract

Much effort goes into studying the causes of systematic errors in Earth System Models (ESMs), and reducing them is often seen as a high priority. Indeed, the development of Digital Twin approaches in climate research is founded on the idea that a sufficiently good model would be able to provide reliable and robust conditional predictions of climate change (predictions conditioned on scenarios of future greenhouse gas emissions). Here, “reliable” encapsulates the idea that the predictions are suitable for use by society in anticipating and planning for future climate change, and “robust” encapsulates the idea that they are unlikely to change as the models are improved and developed. Such an approach, however, raises the question: when is a model sufficiently realistic to provide reliable, detailed predictions? A physical-processes view of current ESMs suggests that they are not close to this level of realism, while a nonlinear dynamical systems perspective raises questions over whether such reliability will ever be achievable for the types of regionally specific, extrapolatory climate change predictions that society may be thought to seek. Given this context, multi-model and perturbed-physics ensembles are often seen as a means to quantify uncertainty in conditional climate change predictions (commonly referred to as “projections” in the scientific community). In the IPCC atlas (https://interactive-atlas.ipcc.ch/) the most easily accessible output is the multi-model median, with the 10th, 25th, 75th and 90th percentiles of the multi-model distribution also prominent. This presentation in terms of probabilities implies that the probabilities themselves have meaning to the users of the data: most users are likely to take them as probabilities of different outcomes in reality.
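To make the atlas-style presentation concrete, the following minimal sketch computes a multi-model median and the quoted percentiles from a synthetic ensemble. The ensemble values here are invented for illustration (drawn from a random-number generator), not real CMIP output; the point is only what the percentile summary is, not what its values mean.

```python
import numpy as np

# Hypothetical ensemble: projected regional warming (degrees C) from 20 models.
# These numbers are illustrative only, not real model output.
rng = np.random.default_rng(0)
warming = rng.normal(loc=2.5, scale=0.6, size=20)

# The atlas-style summary: median plus 10th, 25th, 75th and 90th percentiles
# of the multi-model distribution.
levels = [10, 25, 50, 75, 90]
percentiles = np.percentile(warming, levels)
for p, v in zip(levels, percentiles):
    print(f"{p}th percentile: {v:.2f} C")
```

The argument in the abstract is precisely that such percentiles describe the spread of an arbitrary sample of models, not probabilities of outcomes in reality.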
Unfortunately, multi-model ensembles cannot be interpreted that way: we have no metric for the shape of model space, nor any idea of how to explore it, so the ensemble members cannot be taken as independent samples of possible models. Perturbed-parameter ensembles work in a more clearly defined space of possible model versions, but the shape of that space is also undefined, and as a result the ensemble-based probabilities are again arbitrary. When seeking the best possible information for society, multi-model and perturbed-physics ensembles would benefit from targeting diversity: the greatest possible range of responses given a particular model structure. Model emulators could be used to systematise this process. Such an approach would provide more reliable information. It changes the question, however, from “when is a model sufficiently realistic?” to “how unrealistic does a model have to be to be uninformative about extrapolatory future climatic behaviour?” In this presentation I will discuss and elaborate on these issues.
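One way to read “model emulators could be used to systematise this process” is as follows: fit a cheap statistical surrogate to a few expensive model runs, then use it to search parameter space for the settings that give the widest spread of responses. The sketch below is a deliberately toy illustration of that idea — the “model”, its single parameter, and the polynomial emulator are all hypothetical stand-ins, not any technique the abstract specifies.

```python
import numpy as np

# Toy stand-in for an expensive ESM: response as a function of one
# perturbed physics parameter theta (purely illustrative).
def toy_response(theta):
    return 2.0 + 1.5 * np.sin(3.0 * theta) + 0.5 * theta

# Step 1: run the expensive model at a handful of parameter settings.
train_theta = np.linspace(0.0, 1.0, 6)
train_resp = toy_response(train_theta)

# Step 2: fit a cheap emulator (here, a cubic polynomial) to those runs.
emulator = np.poly1d(np.polyfit(train_theta, train_resp, deg=3))

# Step 3: use the emulator to target diversity — find the parameter
# settings predicted to give the most extreme responses, and propose
# those for the next full model runs.
theta_grid = np.linspace(0.0, 1.0, 1001)
pred = emulator(theta_grid)
theta_low = theta_grid[np.argmin(pred)]
theta_high = theta_grid[np.argmax(pred)]
print(f"next full runs at theta = {theta_low:.3f} and {theta_high:.3f}")
```

The design choice the abstract advocates is visible in step 3: rather than sampling parameter space to estimate probabilities (which the undefined shape of that space makes arbitrary), the emulator is used to bracket the range of responses a given model structure can produce.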
