Abstract

As the most commonly used driving data for gross primary productivity (GPP) estimation, satellite remote sensing vegetation indices (VIs), such as the leaf area index (LAI), often suffer severely from data quality problems caused by cloud contamination and noise. Although various filtering methods are applied to reconstruct missing data and remove noise from VI time series, the impacts of these data quality problems on GPP estimation remain unclear. In this study, the accuracy differences among GPP estimates driven by different VI series are comprehensively analyzed with two light use efficiency (LUE) models (the big-leaf MOD17 and the two-leaf RTL-LUE). Four VI filtering methods are compared, and GPP data from 169 eddy covariance (EC) sites are used for validation. The results demonstrate that all four filtering methods improve GPP simulation accuracy, with the SeasonL1 method performing best for both the MOD17 model (∆R2 = 0.06) and the RTL-LUE model (∆R2 = 0.07). The reconstruction of key change points within temporally continuous gaps may be the primary reason for the differing performance of the four methods. Moreover, the effect of filtering on GPP estimation varies with latitude and season owing to differences in raw data quality: improvements are larger during the growing season and in regions near the equator, where data quality is relatively poor and unfiltered GPP estimates are less accurate. This study can guide the preprocessing of VI data before GPP estimation.
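To illustrate the pipeline the abstract describes, the sketch below gap-fills a cloud-contaminated FPAR/VI time series and then drives a MOD17-style big-leaf LUE model. This is a minimal illustration under stated assumptions, not the study's implementation: the SeasonL1 and other filters are replaced here by simple linear interpolation, and the `eps_max` and stress-ramp parameters are illustrative placeholders (MOD17 takes them from a biome-specific lookup table).

```python
# Hypothetical sketch: gap-fill a cloud-contaminated VI/FPAR series, then
# estimate GPP with a MOD17-style LUE formulation:
#   GPP = eps_max * f(Tmin) * f(VPD) * FPAR * PAR
# The study's actual filters (e.g. SeasonL1) are NOT reproduced here.

def fill_gaps(series):
    """Linearly interpolate None entries (cloud-contaminated observations)."""
    filled = list(series)
    n = len(filled)
    i = 0
    while i < n:
        if filled[i] is None:
            j = i
            while j < n and filled[j] is None:
                j += 1  # find the end of the contiguous gap
            left = filled[i - 1] if i > 0 else (filled[j] if j < n else 0.0)
            right = filled[j] if j < n else left
            span = j - i + 1
            for k in range(i, j):
                w = (k - i + 1) / span
                filled[k] = left + (right - left) * w
            i = j
        else:
            i += 1
    return filled

def ramp(x, lo, hi):
    """Linear 0-1 scalar between lo and hi (MOD17-style stress term)."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def gpp_mod17(fpar, par, tmin_c, vpd_pa,
              eps_max=0.001044,          # kg C / MJ; biome-dependent, illustrative
              tmin_lo=-8.0, tmin_hi=11.4,  # deg C; illustrative ramp bounds
              vpd_lo=650.0, vpd_hi=3100.0):  # Pa; illustrative ramp bounds
    """Big-leaf LUE GPP: downscaled by cold-temperature and VPD stress."""
    t_scalar = ramp(tmin_c, tmin_lo, tmin_hi)        # 0 when cold, 1 when warm
    vpd_scalar = 1.0 - ramp(vpd_pa, vpd_lo, vpd_hi)  # 1 when moist, 0 when dry
    return eps_max * t_scalar * vpd_scalar * fpar * par
```

In this form, the abstract's comparison amounts to swapping `fill_gaps` for each of the four filters and checking the resulting GPP series against EC-tower GPP; the two-leaf RTL-LUE model would additionally split FPAR into sunlit and shaded fractions.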
