Abstract
Traditional pixel-versus-pixel forecast evaluation scores such as the critical success index (CSI) provide a simple way to compare the performance of different forecasts; however, they offer little information on how to improve a particular forecast. This paper strives to demonstrate what additional information an object-based forecast evaluation tool such as the Method for Object-Based Diagnostic Evaluation (MODE) can provide in terms of assessing numerical weather prediction models’ convective storm forecasts. Forecast storm attributes evaluated by MODE in this paper include storm size, intensity, orientation, aspect ratio, complexity, and number of storms. Three weeks of the High Resolution Rapid Refresh (HRRR) model’s precipitation forecasts during the summer of 2010 over the eastern two-thirds of the contiguous United States were evaluated as an example to demonstrate the methodology. It is found that the HRRR model was able to forecast convective storm characteristics rather well, either as a function of time of day or as a function of storm size, although significant biases do exist, especially in storm number and storm size. Another interesting finding is that the model’s ability to forecast new storm initiation varies substantially by region, probably as a result of its differing skill in forecasting convection driven by different forcing mechanisms (i.e., diurnal heating vs. synoptic-scale frontal systems).
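To illustrate the contrast the abstract draws between pixel-based and object-based verification, the following minimal Python sketch computes the CSI on a pair of toy precipitation fields and then extracts simple per-object attributes (size and mean intensity). It is an illustrative assumption-laden example only; it is not the paper's analysis code nor the MODE implementation, and the threshold and attribute choices are hypothetical.

```python
# Illustrative sketch only: contrasts a pixel-wise score (CSI) with a simple
# object-attribute comparison. Thresholds and attributes are assumptions for
# demonstration; this is NOT the MODE algorithm or the paper's code.
import numpy as np
from scipy import ndimage


def critical_success_index(forecast, observed, threshold=1.0):
    """CSI = hits / (hits + misses + false alarms), computed pixel by pixel."""
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    denom = hits + misses + false_alarms
    return hits / denom if denom > 0 else np.nan


def object_attributes(field, threshold=1.0):
    """Label contiguous rain areas; return each object's size and mean intensity."""
    labels, n = ndimage.label(field >= threshold)
    sizes = ndimage.sum(field >= threshold, labels, index=range(1, n + 1))
    intensities = ndimage.mean(field, labels, index=range(1, n + 1))
    return list(zip(sizes, intensities))


# Toy 2D precipitation fields (mm): the forecast storm is displaced but
# otherwise similar to the observed storm.
obs = np.zeros((50, 50)); obs[10:20, 10:25] = 5.0
fcst = np.zeros((50, 50)); fcst[15:25, 20:35] = 5.0

print("CSI:", critical_success_index(fcst, obs))    # low: storms barely overlap
print("Observed objects:", object_attributes(obs))  # yet size/intensity match well
print("Forecast objects:", object_attributes(fcst))
```

In this toy case the CSI is low because the displaced storms overlap only slightly, while the object attributes show that storm size and intensity were forecast well; this is the kind of diagnostic distinction the object-based approach is meant to expose.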