Abstract

In this paper, ensembles of short-range forecasts (up to six hours) from a convection-permitting model with a representation of model error due to unresolved processes are studied. The ensemble prediction system (EPS) used is an experimental convection-permitting version of the UK Met Office's 24-member Global and Regional Ensemble Prediction System (MOGREPS). The method of representing model error variability, which perturbs parameters within the model's parameterisation schemes, has been modified, and we investigate the impact of applying this scheme in different ways. These are: a control ensemble where all ensemble members have the same parameter values; an ensemble where the parameters differ between members but are fixed in time; and ensembles where the parameters are updated randomly every 30 or 60 min. The choice of parameters and their ranges of variability have been determined from expert opinion and parameter sensitivity tests. A case of frontal rain over the southern UK has been chosen, which has a multi-banded rainfall structure. The consequences of including model error variability in the case studied are mixed and are summarised as follows. The multiple banding, evident in the radar, is not captured by any single member. However, in some members the single band is positioned where a secondary band is present in the radar. This is found for all ensembles studied. Adding model error variability with parameters fixed in time does increase the ensemble spread for near-surface variables like wind and temperature, but can actually decrease the spread of the rainfall. Perturbing the parameters periodically throughout the forecast does not further increase the spread and exhibits "jumpiness" in the spread at the times when the parameters are perturbed. Adding model error variability gives an improvement in forecast skill after the first 2–3 h of the forecast for near-surface temperature and relative humidity. For precipitation skill scores, adding model error variability improves the skill in the first 1–2 h of the forecast, but reduces it thereafter. Complementary experiments were performed where the only difference between members was the set of parameter values (i.e. no initial condition variability). The resulting spread was found to be significantly less than the spread from initial condition variability alone.

Highlights

  • Errors in forecasts originate from a number of sources, namely the initial conditions, the boundary conditions and the model formulation

  • We focus on the effect of model error resulting from the parameterisation of unresolved processes: microphysics and turbulent boundary-layer processes

  • To evaluate the spread of the ensemble we use two diagnostics: the first is a domain-averaged measure of the ensemble spread at each time; the second, applied to the hourly rain accumulation only, is the correspondence ratio (CR), which measures spread for the field as a whole rather than at individual grid points (a sketch of both follows this list)
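As an illustration of these two diagnostics, the sketch below computes a domain-averaged spread (the standard deviation across members, averaged over the grid) and a correspondence ratio for an hourly rain accumulation field. The intersection-over-union form of the CR follows its common definition; the array shapes, the 1 mm threshold and the synthetic data are assumptions made for this example rather than details taken from the paper.

```python
import numpy as np

def domain_averaged_spread(fields):
    """Domain-averaged ensemble spread at one time.

    fields : array of shape (n_members, ny, nx) holding one variable
             (e.g. near-surface temperature) for every ensemble member.
    Returns the standard deviation across members, averaged over the domain.
    """
    return np.std(fields, axis=0, ddof=1).mean()

def correspondence_ratio(rain_accum, threshold=1.0):
    """Correspondence ratio (CR) for an hourly rain accumulation field.

    rain_accum : array of shape (n_members, ny, nx) of accumulations (mm).
    threshold  : accumulation threshold in mm (1.0 mm is an assumed value).

    CR is the number of grid points where *all* members exceed the threshold
    (intersection) divided by the number where *any* member exceeds it
    (union), so it measures agreement for the whole field rather than at
    individual grid points.
    """
    exceed = rain_accum >= threshold
    intersection = np.logical_and.reduce(exceed, axis=0).sum()
    union = np.logical_or.reduce(exceed, axis=0).sum()
    return intersection / union if union > 0 else np.nan

# Illustrative use with synthetic data standing in for model output:
# a common rain pattern plus small member-to-member perturbations.
rng = np.random.default_rng(0)
base = rng.gamma(shape=0.5, scale=2.0, size=(100, 120))
members = np.clip(base + rng.normal(scale=0.3, size=(24, 100, 120)), 0.0, None)
print(domain_averaged_spread(members))
print(correspondence_ratio(members, threshold=1.0))
```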



Introduction

Errors in forecasts originate from a number of sources, namely the initial conditions, the boundary conditions and the model formulation. In synoptic-scale forecasts with lead times of up to a day, the first two sources are thought to dominate. At the convective scale, model errors are thought to become more important, especially for relatively short-range forecasts. We investigate a proposed representation of model error that can influence forecast skill at the convective scale. In this study we modify the so-called Random Parameters (RP) scheme used in MOGREPS (Bowler et al., 2008), applying changes designed to make it appropriate for use in a convective-scale ensemble.
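To make the experimental configurations concrete, the following sketch generates per-member parameter values for a control ensemble, an ensemble with parameters fixed in time, and an ensemble whose parameters are re-drawn every 30 or 60 min. The parameter names, ranges, uniform re-draw, midpoint defaults and 5-min model step are illustrative assumptions, not the values or the exact stochastic update rule used in the modified RP scheme.

```python
import numpy as np

# Illustrative parameters and ranges only; the actual parameters and bounds
# in the modified RP scheme come from expert opinion and sensitivity tests.
PARAM_RANGES = {
    "rain_accretion_factor": (0.5, 2.0),   # hypothetical microphysics parameter
    "neutral_mixing_length": (0.1, 0.5),   # hypothetical boundary-layer parameter
}

def draw_parameters(rng):
    """Draw one set of parameter values uniformly within the allowed ranges."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

def parameter_timeseries(config, n_members, forecast_minutes, update_minutes, seed=0):
    """Parameter values per member at each step (an assumed 5-min model step).

    config: "control"  -> all members keep default values (range midpoints here);
            "fixed"    -> each member gets its own draw, held for the whole forecast;
            "periodic" -> each member re-draws its values every `update_minutes`.
    """
    rng = np.random.default_rng(seed)
    steps = range(0, forecast_minutes + 1, 5)
    series = []
    for _ in range(n_members):
        current = ({k: 0.5 * (lo + hi) for k, (lo, hi) in PARAM_RANGES.items()}
                   if config == "control" else draw_parameters(rng))
        member = {}
        for t in steps:
            if config == "periodic" and t > 0 and t % update_minutes == 0:
                current = draw_parameters(rng)   # independent re-draw at update times
            member[t] = dict(current)
        series.append(member)
    return series

# e.g. 24 members, 6 h forecast, parameters re-drawn every 30 min
params = parameter_timeseries("periodic", n_members=24,
                              forecast_minutes=360, update_minutes=30)
```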

