Abstract

Mesoscale model gridpoint temperature data from simulations in the southwestern United States during the summer of 1990 are compared with both observations and statistical guidance from large-scale models over a 32-day period. Although the raw model temperature data at the lowest sigma level typically are much lower than observed, when the mean temperature bias is removed, the model values of high temperature compare favorably with both observations and operational statistical guidance products. A simple 7-day running mean bias calculation that could be used in an operational environment is tested and also found to produce good results. These comparisons suggest that the ability of mesoscale model gridpoint data to produce useful and accurate forecast products through the use of very simple bias corrections should be explored fully as mesoscale model data become routinely available.
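The abstract does not spell out the exact formulation of the 7-day running mean bias correction, so the following is a minimal sketch of the general idea in Python, assuming daily high-temperature series at a single gridpoint and a trailing 7-day window that excludes the current day. The function name `running_mean_bias_correction` and the synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def running_mean_bias_correction(forecast, observed, window=7):
    """Subtract the mean forecast-minus-observed bias over the previous
    `window` days from each day's forecast. Early days without a full
    window of history are left uncorrected. (Whether the paper's window
    is trailing or centered is an assumption here.)"""
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    corrected = forecast.copy()
    for t in range(window, forecast.size):
        bias = np.mean(forecast[t - window:t] - observed[t - window:t])
        corrected[t] = forecast[t] - bias
    return corrected

# Synthetic 32-day example: a forecast running about 3 degrees too cold.
rng = np.random.default_rng(0)
obs = 35.0 + rng.normal(0.0, 2.0, size=32)        # observed daily highs
fcst = obs - 3.0 + rng.normal(0.0, 1.0, size=32)  # cold-biased forecast
corr = running_mean_bias_correction(fcst, obs)

print("raw mean bias:      ", np.mean(fcst[7:] - obs[7:]))
print("corrected mean bias:", np.mean(corr[7:] - obs[7:]))
```

A trailing window is used here because an operational correction can only draw on verification data already available at forecast time; with the synthetic cold bias above, the corrected series' mean error collapses toward zero.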
