Abstract

This paper presents the results of a forecast skill score comparison for the popular two-semester weather forecast game played at the University of Missouri, Columbia, MO, United States, over a total of 106 days during the autumn 2006 and winter 2007 semesters. The skill of a relatively inexperienced, first-time student forecaster (SF), who applied the funnel approach to weather forecasting, is compared with the then state-of-the-art mesoscale operational numerical weather prediction (NWP) model outputs and with the observations. Several forecast skill measures are employed to compare different aspects of the game players' performance in a conventional forecast game set up in an educational environment, with particular attention paid to the associated sampling uncertainty in the analysis. Bootstrap resampling based confidence intervals are computed and compared for the SF and NWP model forecasts against the observations to quantify the relative accuracies of the forecasts. Averaged over the period of the game, the SF performed comparably to the NWP models, and better than climatology and persistence, for next-day forecasts of temperature and precipitation. These results support the judgement that the contribution of human forecasters remains a crucial one, even in the contemporary era of progressively improving and fully automated NWP model outputs.
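As a minimal illustration of the kind of bootstrap resampling mentioned above, the sketch below computes a percentile-bootstrap confidence interval for a mean absolute temperature-forecast error over 106 forecast days. The array of daily errors, the number of resamples, and the confidence level are illustrative assumptions and are not taken from the paper itself.

```python
# Sketch: percentile-bootstrap confidence interval for a mean absolute
# forecast error, one simple way to quantify sampling uncertainty in a
# skill-score comparison. All inputs below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(errors, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap CI for the mean of daily forecast errors."""
    errors = np.asarray(errors, dtype=float)
    n = errors.size
    # Resample the daily errors with replacement and recompute the mean.
    boot_means = np.array([
        rng.choice(errors, size=n, replace=True).mean()
        for _ in range(n_boot)
    ])
    lower = np.percentile(boot_means, 100 * alpha / 2)
    upper = np.percentile(boot_means, 100 * (1 - alpha / 2))
    return errors.mean(), (lower, upper)

# Hypothetical daily absolute temperature errors (deg C) for 106 days.
daily_abs_errors = rng.gamma(shape=2.0, scale=1.2, size=106)
mean_err, (lo, hi) = bootstrap_ci(daily_abs_errors)
print(f"mean |error| = {mean_err:.2f} C, 95% CI = ({lo:.2f}, {hi:.2f})")
```

Overlapping (or non-overlapping) intervals of this kind are what allow the SF and NWP forecasts to be judged as statistically comparable rather than merely similar in their averaged scores.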
