Abstract

Land surface temperature (LST) is a key variable in the physical processes governing land surface energy and water balance at the global scale, and it is becoming increasingly important across a variety of water resource applications. However, time series of LST observations frequently contain missing values for reasons such as cloud cover. Missing-value imputation replaces these gaps with realistic estimates. The objective of this study was to investigate and compare the potential of machine learning (ML) models, including K-Nearest Neighbors (KNN), Support Vector Regression (SVR), Boosted Regression Trees (BRT), and Extreme Learning Machine (ELM), for spatiotemporal imputation of LST satellite images in New Mexico's Lower Rio Grande Valley (LRGV), where LST is a critical variable for water resource studies. Cross-validation was applied to optimize the parameters of each model separately. The model comparison showed that SVR was the most accurate model across missing ratios from 0.1 to 0.8 (mean RMSE between 0.24 °C and 0.33 °C). The BRT and ELM models performed at a similar level overall, with nearly identical mean CV-RMSE values of about 0.5 °C; however, the ELM-imputed maps showed noisy estimates, particularly as the missing ratio increased, and BRT outperformed ELM at lower missing ratios (0.1 to 0.5). The KNN model was found to be the least reliable of all the models for spatiotemporal imputation of LST, with a mean RMSE of 0.982 °C. Blurring effects were also more pronounced in the KNN-imputed maps than in those of the other models, particularly at higher missing ratios. Overall, the findings confirmed the superiority of SVR and BRT over ELM and KNN for spatiotemporal imputation of LST.
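To make the evaluation protocol concrete, the following is a minimal sketch (not the authors' code) of the kind of comparison the abstract describes: a fraction of LST pixels is artificially masked, regression models are fit on the observed pixels' spatiotemporal coordinates with cross-validated hyperparameters, and the imputed values are scored with RMSE. The synthetic LST stack, the (time, row, column) feature set, and the hyperparameter grids are illustrative assumptions; ELM is omitted because it has no standard scikit-learn implementation.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in for a stack of LST scenes: (time, rows, cols) in degC.
t, r, c = np.meshgrid(np.arange(8), np.arange(30), np.arange(30), indexing="ij")
lst = 25 + 5 * np.sin(t / 8 * 2 * np.pi) + 0.05 * r - 0.03 * c \
      + rng.normal(0, 0.2, t.shape)

# Each pixel becomes one sample with spatiotemporal features (t, row, col).
X = np.column_stack([t.ravel(), r.ravel(), c.ravel()]).astype(float)
y = lst.ravel()

for missing_ratio in (0.1, 0.5, 0.8):
    mask = rng.random(y.size) < missing_ratio  # True marks a simulated gap

    # Hyperparameters tuned by cross-validation, one grid per model
    # (grids are assumed, not taken from the paper).
    models = {
        "KNN": GridSearchCV(KNeighborsRegressor(),
                            {"n_neighbors": [3, 5, 10]}, cv=5),
        "SVR": GridSearchCV(SVR(),
                            {"C": [1, 10], "gamma": ["scale", 0.1]}, cv=5),
        "BRT": GridSearchCV(GradientBoostingRegressor(),
                            {"n_estimators": [100, 300],
                             "learning_rate": [0.05, 0.1]}, cv=5),
    }
    for name, model in models.items():
        model.fit(X[~mask], y[~mask])            # train on observed pixels
        pred = model.predict(X[mask])            # impute the masked pixels
        rmse = mean_squared_error(y[mask], pred) ** 0.5
        print(f"missing ratio {missing_ratio:.1f}  {name}: RMSE = {rmse:.3f} degC")
```

On real imagery, the masked pixels would come from cloud flags rather than random sampling, and the imputed predictions would be written back into the scene to produce the gap-filled maps the study compares visually.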
