Abstract

On scales of ≈ 10 days (the lifetime of planetary-scale structures), there is a drastic transition from high-frequency weather to low-frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models; thus, in GCM (general circulation model) macroweather forecasts, the weather is a high-frequency noise. However, neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic so that even a two-parameter model can perform as well as GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the large stochastic memories that we quantify. Since macroweather temporal (but not spatial) intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the ScaLIng Macroweather Model (SLIMM). SLIMM is based on a stochastic ordinary differential equation, differing from the usual linear stochastic models (such as linear inverse modelling, LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes that there is no low-frequency memory, SLIMM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful stochastic forecasts of natural macroweather variability is to first remove the low-frequency anthropogenic component. A previous attempt to use fGn for forecasts had disappointing results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill – with no adjustable parameters – show excellent agreement with hindcasts, and these show some skill even at decadal scales. We also compare our forecast errors with those of several GCM experiments (with and without initialization) and with other stochastic forecasts, showing that even this simplest two-parameter SLIMM is somewhat superior. In the future, using a space–time (regionalized) generalization of SLIMM, we expect to be able to exploit the system memory more extensively and obtain even more realistic forecasts.
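
To make the forecasting idea concrete, here is a minimal sketch (not the authors' code, and based only on the description above) of how the memory of an fGn process can be exploited: for a stationary Gaussian process, the minimum mean-square error forecast is a linear combination of past values whose weights follow from the process autocovariance. The Hurst-style exponent h, the series length, and the function names below are illustrative assumptions; the sketch uses NumPy and SciPy.

    # Minimal sketch of an fGn-based linear forecast (illustrative, not the paper's code).
    import numpy as np
    from scipy.linalg import solve_toeplitz

    def fgn_autocovariance(h, n):
        """Autocovariance of unit-variance fractional Gaussian noise at lags 0..n."""
        k = np.arange(n + 1, dtype=float)
        return 0.5 * (np.abs(k + 1)**(2*h) - 2*np.abs(k)**(2*h) + np.abs(k - 1)**(2*h))

    def fgn_forecast(past, h, steps=1):
        """MMSE linear forecast of a zero-mean fGn series 'past' (oldest first), 'steps' ahead."""
        m = len(past)
        gamma = fgn_autocovariance(h, m + steps)
        # Lag between each past value and the forecast target.
        lags_to_target = np.arange(m, 0, -1) + (steps - 1)
        # Solve the Toeplitz normal equations Gamma w = gamma(target lags).
        w = solve_toeplitz(gamma[:m], gamma[lags_to_target])
        return float(w @ past)

    # Example: forecast one step ahead from 120 months of (detrended) anomalies.
    rng = np.random.default_rng(0)
    series = rng.standard_normal(120)   # placeholder data, not real temperatures
    print(fgn_forecast(series, h=0.8, steps=1))

The point of the sketch is that, unlike a white-noise or short-memory model, the forecast weights remain appreciable far into the past, which is the "huge memory" referred to above.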

Highlights

  • Due to their sensitive dependence on initial conditions, the classical deterministic prediction limit of GCMs is about 10 days – the lifetime of planetary-sized structures

  • Since macroweather temporal intermittency is low, we propose using the simplest model based on fractional Gaussian noise: the ScaLIng Macroweather Model (SLIMM)

  • While the difference in the value of βl might not seem significant, the linear inverse modelling (LIM) white noise value βl = 0 has no low-frequency predictability, whereas the actual values 0.2 < βl < 0.8 correspond to potentially enormous predictability. This basic feature of “long-range statistical dependency” has been regularly pointed out in the scaling literature, and an attempt was already made to exploit it (Baillie and Chung, 2002b; see below), but the actual extent of this enhanced predictability has not been quantified before now. This justifies the development of the new ScaLIng Macroweather Model (SLIMM) that we present below (the link between βl and the memory is sketched after these highlights)
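
As background, the following are standard fGn relations under the usual Hurst-exponent convention (denoted h here, which may differ from the exponent convention used in the paper; they are quoted for orientation, not taken from the paper itself). The spectral exponent βl controls how slowly the correlations decay:

    S(f) \propto f^{-\beta_l}, \qquad \beta_l = 2h - 1, \qquad \rho(k) \sim h(2h-1)\, k^{2h-2} \quad (k \to \infty)

With these conventions, βl = 0.2–0.8 corresponds to h ≈ 0.6–0.9, for which the correlations decay so slowly that they are non-summable (long-range dependence), whereas for the white-noise value βl = 0 the correlations vanish at all nonzero lags; this is the memory that SLIMM sets out to exploit.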


Summary

Introduction

Due to their sensitive dependence on initial conditions, the classical deterministic prediction limit of GCMs (general circulation models) is about 10 days – the lifetime of planetary-sized structures (τw). For scales longer than τw, following Hasselmann (1976), the high-frequency weather can be considered as a noise driving an effectively stochastic low-frequency system; the separation of scales needed to justify such modelling is provided by the drastic transitions at τw and τow. Ever since Lovejoy and Schertzer (1986), there has been a growing literature (Koscielny-Bunde et al., 1998; Huybers and Curry, 2006; Blender et al., 2006; Franzke, 2012; Rypdal et al., 2013; Yuan et al., 2014; and see the extensive review in Lovejoy and Schertzer, 2013) showing that the temperature (and other atmospheric fields) is scaling at low frequencies, with spectra significantly different from those of Ornstein–Uhlenbeck processes, notably with βl in the range 0.2–0.8; the corresponding low-frequency regime (at scales longer than τw ≈ 10 days) is referred to as “macroweather” (Lovejoy, 2013).
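
To make the contrast concrete (a standard result, stated here for orientation rather than quoted from the paper): an Ornstein–Uhlenbeck process, the continuous-time analogue of the integer-order models used in LIM, has a spectrum that flattens below a characteristic frequency fc, whereas the observed macroweather spectra continue to scale:

    S_{\mathrm{OU}}(f) \propto \frac{1}{f^{2} + f_c^{2}} \to \mathrm{const}\ (\beta_l = 0)\ \text{for}\ f \ll f_c, \qquad S_{\mathrm{obs}}(f) \propto f^{-\beta_l}, \quad 0.2 < \beta_l < 0.8

The flat low-frequency limit is why white-noise-forced, integer-order models have essentially no memory beyond a few correlation times; the power-law form does not flatten, and its slowly decaying correlations are the basis of the SLIMM forecasts developed below.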

Linear and nonlinear stochastic atmospheric models
From LIM to SLIMM
Definition and links to fBm
Spectrum and fluctuations
Using fGn to model and forecast the temperature
Forecasts
The data and the removal of anthropogenic effects
Estimating H from the residues
The numerical approach
The hindcasts
Hindcast skill
Hindcast correlations
Findings
Conclusions