Abstract

The performance of multimodel ensemble forecasting depends on the weights given to the different models of the ensemble in the postprocessing of the direct model forecasts. This paper compares the following weighting methods, with and without taking single-model performance into account: equal weighting of models (EW); simple skill-based weighting (SW), using a simple model performance indicator; and weighting by Bayesian model averaging (BMA). These methods are tested for both short-range weather and seasonal temperature forecasts. The prototype seasonal multimodel ensemble is the Development of a European Multimodel Ensemble System for Seasonal-to-Interannual Prediction (DEMETER) system, with four different models and nine forecasts per model. The short-range multimodel prototype system is the Network of European Meteorological Services (EUMETNET) Poor Man's Ensemble Prediction System (PEPS), with 14 models and one forecast per model. It is shown that despite the different forecast ranges and spatial scales, the impact of weighting is comparable for both forecast systems and is related to the same ensemble characteristics. In both cases the added value of ensemble forecasting over single-model forecasting increases considerably with decreasing correlation of the models' forecast errors, with a relation depending only on the number of models. Also, in both cases a larger spread in model performance increases the added value of combining model forecasts using the performance-based SW or BMA weighting instead of EW. Finally, the more complex BMA weighting adds value over SW only if the best model performs better than the ensemble with EW weighting.
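The EW and SW combination schemes described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the skill indicator is assumed here to be the inverse of each model's mean absolute error over a training period, and the combined forecast is taken as the weighted mean of the per-model forecasts; the paper's actual performance indicator and BMA procedure may differ.

```python
import numpy as np

def equal_weights(n_models):
    """EW: every model receives the same weight 1/n."""
    return np.full(n_models, 1.0 / n_models)

def skill_weights(mean_abs_errors):
    """SW: weight each model by a simple skill indicator.

    Assumption for illustration: skill is taken as the inverse of the
    model's mean absolute error on past forecasts, normalized so the
    weights sum to one.
    """
    skill = 1.0 / np.asarray(mean_abs_errors, dtype=float)
    return skill / skill.sum()

def combine(forecasts, weights):
    """Weighted multimodel combination: the weighted mean forecast."""
    return float(np.dot(weights, forecasts))

# Example: three models forecast 1.0, 2.0, and 4.0; the first model has
# been twice as accurate as the other two on past cases.
forecasts = np.array([1.0, 2.0, 4.0])
w_ew = equal_weights(3)                    # [1/3, 1/3, 1/3]
w_sw = skill_weights([1.0, 2.0, 2.0])      # [0.5, 0.25, 0.25]
print(combine(forecasts, w_ew))            # EW combination
print(combine(forecasts, w_sw))            # SW shifts toward the better model
```

With equal weights the combined forecast is the plain multimodel mean (7/3 here); the skill-based weights pull the combination toward the historically better model (2.0 here). BMA generalizes this idea by fitting a full predictive distribution per model and estimating the weights as posterior model probabilities, typically by maximum likelihood on a training sample.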
