Abstract

Reconstruction of the earth’s surface temperature from proxy data is an important task because of the need to compare recent changes with past variability. However, the statistical properties and robustness of climate reconstruction methods are not well known, which has led to a heated discussion about the quality of published reconstructions. In this paper a systematic study of the properties of reconstruction methods is presented. The methods include both direct hemispheric-mean reconstructions and field reconstructions, among them reconstructions based on canonical regression and regularized expectation maximization algorithms. The study is based on temperature fields for which the target of the reconstructions is known. In particular, the focus is on how well the reconstructions reproduce low-frequency variability, biases, and trends. A climate simulation from an ocean–atmosphere general circulation model of the period A.D. 1500–1999, including both natural and anthropogenic forcings, is used. However, reconstructions include a large element of stochasticity, and to draw robust statistical inferences, reconstructions of a large ensemble of realistic temperature fields are needed. To this end a novel technique has been developed to generate surrogate fields with the same temporal and spatial characteristics as the original surface temperature field from the climate model. Pseudoproxies are generated by degrading a number of gridbox time series. The number of pseudoproxies and the relation between the pseudoproxies and the underlying temperature field are determined realistically from Mann et al. It is found that all reconstruction methods contain a large element of stochasticity, so that the methods cannot be compared and conclusions drawn from a single or a few realizations: very different results can be obtained by applying the same reconstruction method to different surrogate fields. This might explain some of the recently published divergent results. It is also found that all methods systematically give large biases and underestimate both trends and the amplitude of the low-frequency variability; the underestimation is typically 20%–50%. The shape of the low-frequency variability, however, is in general well reconstructed. Some potential for validating the methods on independent data is found. However, to gain information about the reconstructions’ ability to capture the preindustrial level it is necessary to consider the average level in the validation period rather than the year-to-year correlations. The influence on the reconstructions of the number of proxies, the type of noise used to generate the proxies, and the strength of the variability, as well as the effect of detrending the data prior to calibration, is also reported.
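The pseudoproxy construction described above (degrading gridbox temperature series with noise at a prescribed signal-to-noise ratio) can be illustrated with a minimal sketch. The function and parameter names below (make_pseudoproxy, snr) are illustrative assumptions rather than code from the paper, and white noise is used for simplicity even though the study also examines other noise types.

```python
# Illustrative sketch: turn a gridbox temperature series into a pseudoproxy
# by adding white noise scaled to a chosen signal-to-noise ratio (SNR).
# Names and parameter values here are assumptions, not the authors' code.
import numpy as np

def make_pseudoproxy(gridbox_series: np.ndarray, snr: float, rng=None) -> np.ndarray:
    """Degrade a gridbox temperature series so that
    std(signal) / std(added noise) equals `snr`."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(gridbox_series.size)
    noise *= gridbox_series.std() / (snr * noise.std())
    return gridbox_series + noise

# Example: an annual series covering A.D. 1500-1999 degraded with SNR = 0.5,
# a noise level often used in pseudoproxy experiments.
years = np.arange(1500, 2000)
temperature = 0.2 * np.sin(2 * np.pi * years / 70.0) + 0.1 * np.random.default_rng(0).standard_normal(years.size)
proxy = make_pseudoproxy(temperature, snr=0.5)
```

Repeating such a construction over an ensemble of surrogate temperature fields is what allows the stochastic spread of the reconstruction methods to be assessed, rather than judging them from a single realization.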

