Abstract

The effect of rainfall-sampling errors on distributed hydrologic simulations was evaluated in a study of localized thunderstorms over a midsized (150 km²) semiarid watershed. Rainfall fields based on observations from a very dense rain gage network were compared to rainfall fields based on observations from a subset of the original gages. The rain gage density of the “sparse” network (1 gage per 20 km²) was selected to represent the typical gage density of an automated local evaluation in real time (ALERT) type flash flood warning system. Inadequate rain gage densities in the case of the sparse network produced errors in simulated peaks that, on average, represented 58% of the observed peak flow. Approximately half of the difference between observed and simulated peaks was due to rainfall-sampling errors. Simulations were also conducted with rainfall similar to next generation weather radar (NEXRAD) digital precipitation estimates in that it represented areal averages within 4 km × 4 km pixels. Spatial averaging of rainfall over 4 km × 4 km pixels led to consistent reductions in simulated peaks that, on average, represented 50% of the observed peak flow. Hence it appears that the current spatial resolution of ALERT-type precipitation measurements and of 4 km × 4 km radar precipitation estimates may not be sufficient to produce reliable rainfall-runoff simulations and forecasts in midsized watersheds of the southwestern United States subject to localized thunderstorms and large infiltration losses.
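
The following is a minimal illustrative sketch (in Python, not taken from the study) of the two rainfall-degradation treatments the abstract describes and of the peak-flow error statistic. The gage grid, rainfall depths, and peak flows are hypothetical placeholders; the actual study used a distributed rainfall-runoff model driven by observed storm events.

import numpy as np

def block_average_rainfall(rain_field: np.ndarray, cell_km: float, pixel_km: float = 4.0) -> np.ndarray:
    # Average a fine-resolution rainfall grid onto coarse pixels (e.g., 4 km x 4 km),
    # mimicking the spatial averaging implicit in NEXRAD-style precipitation estimates.
    n = int(round(pixel_km / cell_km))                # fine cells per coarse-pixel edge
    rows, cols = rain_field.shape
    trimmed = rain_field[:rows - rows % n, :cols - cols % n]
    return trimmed.reshape(trimmed.shape[0] // n, n, trimmed.shape[1] // n, n).mean(axis=(1, 3))

def mean_peak_error_pct(simulated: np.ndarray, observed: np.ndarray) -> float:
    # Mean absolute peak-flow error expressed as a percentage of the observed peak,
    # the form in which the 58% and 50% figures above are reported.
    return float(np.mean(np.abs(simulated - observed) / observed) * 100.0)

# Synthetic example: a 12 x 12 grid of 1-km cells (~144 km², close to the 150 km² watershed).
rng = np.random.default_rng(0)
fine_rain = rng.gamma(shape=2.0, scale=5.0, size=(12, 12))    # synthetic storm-total depths (mm)
coarse_rain = block_average_rainfall(fine_rain, cell_km=1.0)  # 3 x 3 grid of 4-km pixels

# Hypothetical observed vs. simulated peaks (m³/s) under the degraded rainfall input.
observed_peaks = np.array([12.0, 30.0, 8.5])
simulated_peaks = np.array([5.0, 18.0, 3.0])
print(f"mean peak error: {mean_peak_error_pct(simulated_peaks, observed_peaks):.0f}% of observed peak")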
