Abstract

Simulation models are widely employed to make probability forecasts of future conditions on seasonal to annual lead times. Added value in such forecasts is reflected in the information they add, either to purely empirical statistical models or to simpler simulation models. An evaluation of seasonal probability forecasts from the Development of a European Multimodel Ensemble system for seasonal to inTERannual prediction (DEMETER) and ENSEMBLES multi‐model ensemble experiments is presented. Two particular regions are considered: Nino3.4 in the Pacific and the Main Development Region in the Atlantic; these regions were chosen before any spatial distribution of skill was examined. The ENSEMBLES models are found to have skill against the climatological distribution on seasonal time‐scales. For models in ENSEMBLES that have a clearly defined predecessor model in DEMETER, the improvement from DEMETER to ENSEMBLES is discussed. Due to the long lead times of the forecasts and the evolution of observation technology, the forecast‐outcome archive for seasonal forecast evaluation is small; arguably, evaluation data for seasonal forecasting will always be precious. Issues of information contamination from in‐sample evaluation are discussed, and impacts (both positive and negative) of variations in cross‐validation protocol are demonstrated. Other difficulties due to the small forecast‐outcome archive are identified. The claim that the multi‐model ensemble provides a ‘better’ probability forecast than the best single model is examined and challenged. Significant forecast information beyond the climatological distribution is also demonstrated in a persistence probability forecast. The ENSEMBLES probability forecasts add significantly more information to empirical probability forecasts on seasonal time‐scales than on decadal scales. Current operational forecasts might be enhanced by melding information from both simulation models and empirical models. Simulation models based on physical principles are sometimes expected, in principle, to outperform empirical models; direct comparison of their forecast skill provides information on progress toward that goal.
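The notion of "skill against the climatological distribution" can be made concrete with a toy calculation using the ignorance (logarithmic) score, one common proper score for probability forecasts. Everything below is synthetic and purely illustrative: the sample sizes, ensemble size, and Gaussian dressing of the ensemble are assumptions for this sketch, not the evaluation protocol of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a forecast-outcome archive: 40 "years" of outcomes
# and a 9-member ensemble forecast for each (all values hypothetical).
n_years, n_members = 40, 9
outcomes = rng.normal(0.0, 1.0, n_years)
ensembles = outcomes[:, None] + rng.normal(0.0, 0.7, (n_years, n_members))

def ignorance_bits(y, mu, sigma):
    """Ignorance score in bits: -log2 of a Gaussian predictive density at y."""
    return 0.5 * np.log2(2 * np.pi * sigma**2) + (y - mu)**2 / (2 * sigma**2 * np.log(2))

# Model forecast: a Gaussian fit to each year's ensemble
ign_model = ignorance_bits(outcomes, ensembles.mean(1), ensembles.std(1, ddof=1))

# Climatological benchmark: a single Gaussian fit to all outcomes
ign_clim = ignorance_bits(outcomes, outcomes.mean(), outcomes.std(ddof=1))

# Positive skill = fewer bits of ignorance than climatology, on average
skill_bits = ign_clim.mean() - ign_model.mean()
```

Note that the climatology here is fit in-sample, which already hints at the contamination issue the abstract raises: the outcome being scored helped build the benchmark it is scored against.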

Highlights

  • Skillful probabilistic forecasting of seasonal weather and climate statistics would be of value in many fields, including agriculture, health and insurance

  • The current generation of seasonal forecasts will retire before the forecast-outcome archive grows significantly larger: seasonal verification data are precious! This complicates forecast calibration, as evaluation must be performed using cross-validation with only a small sample

  • Probabilistic seasonal forecasts based on the ENSEMBLES stream II experiment demonstrate increased skill in forecasting sea-surface temperatures in the Nino3.4 region over that of the DEMETER model simulations



Introduction

Skillful probabilistic forecasting of seasonal weather and climate statistics would be of value in many fields, including agriculture, health and insurance. The multi-model ensemble simulations from the DEMETER and ENSEMBLES projects provide a basis for quantifying the skill of GCM forecasts and an opportunity to assess the benefit of using multi-model ensembles (Weisheimer et al., 2009; Alessandri et al., 2011) over other approaches, such as forecasts based on statistical models (Smith, 1992; van Oldenborgh, 2005; Coelho et al., 2006; Van Den Dool, 2007; Suckling and Smith, 2013).
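One simple statistical benchmark of the kind referenced above is a damped-persistence probability forecast. As a hedged sketch, the code below fits such a forecast to a synthetic AR(1) series standing in for monthly Nino3.4 anomalies; the AR coefficient, noise level, and 3-month lead are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic AR(1) stand-in for monthly Nino3.4 SST anomalies
n = 240
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal(0.0, 0.5)

# Damped persistence at a 3-month lead: regress x(t+lead) on x(t);
# the probability forecast is Gaussian with the regression residual spread
lead = 3
past, future = x[:-lead], x[lead:]
slope = np.cov(past, future, ddof=1)[0, 1] / past.var(ddof=1)
intercept = future.mean() - slope * past.mean()
resid = future - (intercept + slope * past)

mu_forecast = intercept + slope * x[-1]  # predictive mean, 3 months ahead
sigma_forecast = resid.std(ddof=1)       # predictive spread
```

The damping (a slope between 0 and 1) pulls the forecast toward climatology as the lead grows, which is what lets such an empirical forecast carry information beyond the climatological distribution at short leads.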

The seasonal multi-model ENSEMBLES forecasts
Defining probabilistic forecast skill
ENSEMBLES seasonal forecast skill
Contrasting skill of ENSEMBLES and DEMETER
Contrasting ENSEMBLES seasonal skill with persistence forecasts
More models or more members?
The importance of being proper
Multi-model ensembles when data are precious
Establishing skill when data are precious
Conclusions