Abstract

Ensemble sensitivity analysis (ESA) is a useful and computationally inexpensive tool for analyzing how features in the flow at early forecast times affect relevant forecast features later in the forecast. Because many observations measured between model initialization times remain unused, ensemble sensitivity may be used to increase predictability and forecast accuracy through an objective ensemble subsetting technique. This technique identifies ensemble members with the smallest errors in regions of high sensitivity to produce a smaller, more accurate ensemble subset. Ensemble subsets can significantly reduce synoptic-scale forecast errors, but applying this strategy to convective-scale forecasts presents additional challenges. Objective verification of the sensitivity-based ensemble subsetting technique is conducted for ensemble forecasts of 2–5-km updraft helicity (UH) and simulated reflectivity. Many degrees of freedom are varied to identify the lead times, subset sizes, forecast thresholds, and atmospheric predictors that provide the most forecast benefit. Subsets vastly reduce UH forecast errors in an idealized framework but tend to degrade fractions skill scores and reliability in a real-world framework. Results reveal that this discrepancy stems from verifying probabilistic UH forecasts against storm-report-based observations, which effectively dampens the technique's performance. The potential of ensemble subsetting, and likely of other postprocessing techniques, is limited by tuning UH forecasts to predict severe reports. Additional diagnostic ideas for improving postprocessing-tool optimization for convection-allowing models are discussed.
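The subsetting logic described above can be made concrete with a minimal sketch, assuming a scalar forecast metric J per ensemble member (e.g., domain-aggregated UH) and a flattened early-time predictor field. The covariance-over-variance sensitivity estimate, the 90th-percentile sensitivity mask, the RMSE error norm, and all function names here are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

def ensemble_sensitivity(metric, field):
    """Estimate dJ/dx_i ~ cov(J, x_i) / var(x_i) at each grid point.

    metric: (n_members,) scalar forecast metric J per member
    field:  (n_members, n_points) early-time state variable x
    """
    Jp = metric - metric.mean()
    Xp = field - field.mean(axis=0)
    cov = (Xp * Jp[:, None]).mean(axis=0)   # cov(J, x_i) per grid point
    var = Xp.var(axis=0)                    # var(x_i) per grid point
    return np.divide(cov, var, out=np.zeros_like(cov), where=var > 0)

def sensitivity_subset(field, obs, sens, n_subset, pct=90):
    """Select the n_subset members with the smallest early-time errors
    in high-sensitivity regions (top decile of |sensitivity| here)."""
    mask = np.abs(sens) >= np.percentile(np.abs(sens), pct)
    err = np.sqrt(((field[:, mask] - obs[mask]) ** 2).mean(axis=1))
    return np.argsort(err)[:n_subset]

# Toy usage: a 40-member ensemble on 500 grid points.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 500))
J = X[:, :50].mean(axis=1)                  # metric tied to one region
sens = ensemble_sensitivity(J, X)
members = sensitivity_subset(X, rng.normal(size=500), sens, n_subset=10)
```

The key design choice, per the abstract, is that member errors are measured only where sensitivity is large, so the subset keeps members whose early-time state best matches observations precisely where the forecast metric is most responsive.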
