Abstract

Ambiguity is uncertainty in the prediction of forecast uncertainty, or in the forecast probability of a specific event, associated with random error in an ensemble forecast probability density function. In ensemble forecasting, ambiguity arises from finite sampling and deficient simulation of the various sources of forecast uncertainty. This study introduces two practical methods of estimating ambiguity and demonstrates them on 5-day, 2-m temperature forecasts from the Japan Meteorological Agency’s Ensemble Prediction System. The first method uses the error characteristics of the calibrated ensemble as well as the ensemble spread to predict likely errors in forecast probability. The second method applies bootstrap resampling on the ensemble members to produce multiple likely values of forecast probability. Both methods include forecast calibration, since ambiguity results from random and not systematic errors, which must be removed to reveal the ambiguity. Additionally, use of a more robust calibration technique (improving beyond just correcting average errors) is shown to reduce ambiguity. Validation using a low-order dynamical system reveals that both estimation methods have deficiencies but exhibit some skill, making them candidates for application to decision making—the subject of a companion paper.
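The bootstrap approach described above can be sketched in a few lines. This is an illustrative example only, not the authors' implementation: the ensemble size, distribution, and temperature threshold below are all hypothetical, and the synthetic ensemble stands in for calibrated JMA EPS members. The event probability is recomputed over many resamples (drawn with replacement) of the members, and the spread of those resampled probabilities serves as an estimate of ambiguity.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical calibrated 2-m temperature ensemble (deg C).
# 50 members and N(15, 2) are illustrative choices, not the JMA EPS setup.
ensemble = rng.normal(loc=15.0, scale=2.0, size=50)
threshold = 18.0  # event of interest: temperature exceeds 18 deg C

# Forecast probability of the event from the full ensemble
p_forecast = np.mean(ensemble > threshold)

# Bootstrap: resample members with replacement and recompute the
# event probability, yielding multiple likely values of that probability.
n_boot = 1000
boot_probs = np.array([
    np.mean(rng.choice(ensemble, size=ensemble.size, replace=True) > threshold)
    for _ in range(n_boot)
])

# Summarize ambiguity as the spread of the bootstrap probabilities,
# here a central 90% interval.
lo, hi = np.percentile(boot_probs, [5, 95])
print(f"forecast probability: {p_forecast:.2f}")
print(f"90% ambiguity interval: [{lo:.2f}, {hi:.2f}]")
```

A wide interval signals high ambiguity: the forecast probability itself is poorly constrained by the finite ensemble, which is the quantity the paper's two estimation methods aim to predict.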
