The availability of spatiotemporally continuous land surface temperature (LST) data at high resolution is critical for many disciplines, including hydrology, meteorology, ecology, and geology. Like other remote sensing data, satellite-based LST suffers from cloud contamination. In this research, more than 5000 daytime and nighttime MODIS LST images acquired during 2014–2020 over the Yazd–Ardakan plain in Yazd, Iran, are utilized. The multi-channel singular spectrum analysis (MSSA) model is employed to reconstruct values missing due to dust, clouds, and sensor defects. Significant eigenvalues are selected using a Monte Carlo test together with spectral analysis of the eigenvalues. It is found that enlarging the window size has no effect on the number of significant signal components, which account for most of the variance of the data; however, the variance attributed to each of the three components does change. Using two images per day, window sizes of 60, 180, 360, and 720 are examined for reconstructing one year of LST, corresponding to monthly, seasonal, semi-annual, and annual LST cycles, respectively. The results show that a window size of 60 achieves the lowest computational cost and the highest accuracy, with a root mean square error (RMSE) of 2.6 °C for the entire study region and 1.4 °C for a selected pixel. The gap-filling performance of MSSA is also compared with that of the harmonic analysis of time series (HANTS) model, showing the superiority of MSSA with an RMSE improvement of about 2.7 °C for the study region. In addition, daytime and nighttime LST series are compared across different land covers. Lastly, the maximum, minimum, and average LST for each day and night, as well as the average and standard deviation of the LST images over the seven-year time series, are computed.
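To illustrate the gap-filling idea described above, the sketch below applies iterative single-channel SSA to a one-dimensional series with missing values: embed the series in a trajectory matrix, truncate the singular value decomposition to the leading components, diagonally average back to a series, and overwrite only the gap positions, repeating until convergence. This is a simplified 1-D analogue of the multi-channel MSSA used in the study, not the paper's implementation; the function name, parameters, and iteration scheme are illustrative assumptions.

```python
import numpy as np

def ssa_fill(series, window, n_comp, n_iter=50):
    """Fill NaN gaps in a 1-D series by iterative truncated SSA
    reconstruction (simplified, single-channel analogue of MSSA)."""
    x = np.array(series, dtype=float)       # copy; do not mutate input
    mask = np.isnan(x)
    x[mask] = np.nanmean(x)                 # initialize gaps with the mean
    n = len(x)
    k = n - window + 1
    for _ in range(n_iter):
        # Trajectory (Hankel) matrix: column j holds x[j : j + window].
        traj = np.column_stack([x[j:j + window] for j in range(k)])
        u, s, vt = np.linalg.svd(traj, full_matrices=False)
        # Low-rank reconstruction from the n_comp leading eigentriples.
        low = (u[:, :n_comp] * s[:n_comp]) @ vt[:n_comp]
        # Diagonal averaging (Hankelization) back to a 1-D series.
        rec = np.zeros(n)
        cnt = np.zeros(n)
        for j in range(k):
            rec[j:j + window] += low[:, j]
            cnt[j:j + window] += 1
        rec /= cnt
        # Update only the gap positions; observations stay untouched.
        x[mask] = rec[mask]
    return x
```

For a series dominated by a few periodic components (as with the monthly-to-annual LST cycles in the abstract), a small window and a handful of leading components are enough for the iteration to settle on plausible gap values; the paper's choice of window size 60 for two-images-per-day data reflects the same trade-off between cycle coverage and computational cost.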