Abstract

This paper discusses methodological aspects of a monitoring process that focuses simultaneously on ensemble forecasts, surface variables, and high-impact events. Which score (or scores) is suitable for this task is a central question, but not the only one to be answered. Here, we investigate the properties of the Brier score, the logarithmic score, and the diagonal elementary score in the context of forecast performance monitoring, as well as the impact of methodological choices such as the event threshold definition, the reference forecast, and the role assigned to representativeness errors. A consistent picture of the verification process eventually emerges, in which the design of the event climatology plays a key role. The study is illustrated by verification results for three surface variables (24 hr precipitation, 10 m wind speed, and 2 m temperature) over 15 years of operational ECMWF ensemble forecasting activities. Results are also compared with a current ECMWF headline score: the relative operating characteristic skill score for the Extreme Forecast Index.
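As a point of reference for the first of the scores named above, the Brier score is the mean squared difference between forecast probabilities and binary observed outcomes. The sketch below is an illustration of that standard definition only, not of the paper's monitoring methodology; the function name and example values are ours.

```python
def brier_score(probs, outcomes):
    """Brier score: mean((p - o)^2) over forecast probabilities p in [0, 1]
    and binary outcomes o in {0, 1}. Ranges from 0 (perfect) to 1 (worst)."""
    assert len(probs) == len(outcomes) and len(probs) > 0
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical example: three probability forecasts of a binary event
# (e.g. 24 hr precipitation exceeding some threshold) and what occurred.
bs = brier_score([0.9, 0.2, 0.7], [1, 0, 1])
print(bs)  # (0.01 + 0.04 + 0.09) / 3 = 0.0466...
```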
