Abstract

Soil moisture datasets vary greatly with respect to their time series variability and signal-to-noise characteristics. Minimizing differences in signal variances is particularly important in data assimilation, where the goal is to optimize the accuracy of the analysis obtained after merging model and observation datasets. Strategies that reduce these differences typically rescale the observation time series to match the model; as a result, the impact of the relative accuracy of the model reference dataset is often neglected. In this study, the impacts of the relative accuracies of model- and observation-based soil moisture time series, for the seasonal and subseasonal (anomaly) components respectively, on optimal model–observation integration are investigated. Experiments are performed using both well-controlled synthetic and real-data test beds. The experiments rescale observations to the model using strategies of decreasing aggressiveness: 1) using the seasonality of the model directly while matching the variance of the observed anomaly component, 2) rescaling the seasonality and the anomaly components separately, and 3) rescaling the entire time series either as one piece or separately for each monthly climatology. All experiments use a simple antecedent precipitation index (API) model and assimilate observations via a Kalman filter. Synthetic and real-data assimilation results demonstrate that rescaling observations more aggressively to the model is favorable when the model is more skillful than the observations; however, it can degrade the Kalman filter analysis when the observations are relatively more accurate.
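The abstract names the model, the filter, and the rescaling strategies without showing how they connect. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: a scalar API recursion of the form API_t = gamma * API_{t-1} + P_t, a strategy 3-style bulk variance matching of observations to the model open loop (plus a strategy 2-style component-wise alternative), and a scalar Kalman filter. All names and parameter values (gamma_true, gamma_model, Q, R, the noise levels, and the 31-day smoothing window) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 3 * 365

# Synthetic daily rainfall forcing with a weak seasonal cycle (values hypothetical)
season = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n_days) / 365.0)
precip = rng.gamma(shape=0.4, scale=5.0, size=n_days) * season

# --- Antecedent precipitation index (API) model: exponential loss + rainfall gain ---
gamma_true, gamma_model = 0.90, 0.80          # "true" vs. (biased) model loss coefficient
precip_model = precip * rng.lognormal(0.0, 0.3, size=n_days)  # error-prone model forcing

truth = np.zeros(n_days)
model_ol = np.zeros(n_days)                   # model "open loop" (no assimilation)
for t in range(1, n_days):
    truth[t] = gamma_true * truth[t - 1] + precip[t]
    model_ol[t] = gamma_model * model_ol[t - 1] + precip_model[t]

obs = truth + rng.normal(0.0, 2.0, size=n_days)   # noisy observations of the truth

# --- Strategy 3-style rescaling: bulk mean/variance matching to the model open loop ---
obs_rs = (obs - obs.mean()) * (model_ol.std() / obs.std()) + model_ol.mean()

# --- Strategy 2-style alternative: rescale seasonal and anomaly components separately ---
def rescale_by_component(o, m, window=31):
    kernel = np.ones(window) / window
    o_seas = np.convolve(o, kernel, mode="same")   # crude moving-average "seasonality"
    m_seas = np.convolve(m, kernel, mode="same")
    o_anom, m_anom = o - o_seas, m - m_seas
    seas = (o_seas - o_seas.mean()) * (m_seas.std() / o_seas.std()) + m_seas.mean()
    anom = o_anom * (m_anom.std() / o_anom.std())
    return seas + anom

# --- Scalar Kalman filter assimilating the rescaled observations into the API model ---
Q, R = 1.0, 4.0          # hypothetical model- and observation-error variances
x, P = 0.0, 1.0          # state estimate and its error variance
analysis = np.zeros(n_days)
for t in range(1, n_days):
    x = gamma_model * x + precip_model[t]     # forecast step
    P = gamma_model**2 * P + Q
    K = P / (P + R)                           # Kalman gain
    x = x + K * (obs_rs[t] - x)               # update with the rescaled observation
    P = (1.0 - K) * P
    analysis[t] = x

print(f"RMSE, model open loop: {np.sqrt(np.mean((model_ol - truth) ** 2)):.3f}")
print(f"RMSE, Kalman analysis: {np.sqrt(np.mean((analysis - truth) ** 2)):.3f}")
```

Swapping obs_rs for rescale_by_component(obs, model_ol) switches from strategy 3 to strategy 2. Because the observations are rescaled toward a biased open loop here, the analysis partly inherits the model's errors; varying gamma_model and the observation noise level reproduces, in miniature, the trade-off the abstract describes.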
