Updating change detection (CD) of land use/land cover (LU/LC) geospatial information with high accuracy is currently important yet challenging, given the wide variety of classification methods, datasets, satellite images, and ancillary data types available. Using only the low-spatial-resolution visible bands of remotely sensed images does not yield sufficiently accurate information, whereas remotely sensed thermal data contain valuable information for monitoring and investigating LU/LC CD. Thermal datasets should therefore be incorporated for better outcomes, and image fusion plays a major role in mapping CD. Accordingly, this study aims to refine the estimation of LU/LC CD by integrating thermal satellite data with visible datasets through: (a) adopting a noise-removal model; (b) resampling the satellite images; (c) fusing and integrating the visible and thermal images using the Gram-Schmidt spectral (GS) method; (d) classifying the images with Mahalanobis distance (MH), maximum likelihood (ML), and artificial neural network (ANN) classifiers applied to Landsat-8 data captured by the Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) in 2015 and 2020, producing twelve LC maps; and (e) comparing all twelve classification results. The results reveal that applying the ANN technique to the fused TIRS and OLI datasets yields the highest accuracy of all the tested classification approaches: overall accuracies of 96.31% and 98.40%, with kappa coefficients of 0.94 and 0.97, for 2015 and 2020, respectively. The ML classifier, in turn, obtains better results than the MH approach.
Fusing and integrating the thermal images improves accuracy by 5%–6% over using the low-spatial-resolution visible datasets alone. Doi: 10.28991/ESJ-2023-07-02-09
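The overall accuracy and kappa coefficients reported above are standard measures computed from a classification error (confusion) matrix. A minimal sketch of both computations is shown below; the 2x2 confusion matrix is purely illustrative and is not the study's actual error matrix.

```python
import numpy as np

# Hypothetical confusion matrix (rows: reference classes, cols: predicted
# classes); illustrative numbers only, not taken from the study.
cm = np.array([[50.0, 2.0],
               [3.0, 45.0]])

def overall_accuracy(cm):
    """Fraction of correctly classified samples: diagonal sum / total."""
    return np.trace(cm) / cm.sum()

def kappa(cm):
    """Cohen's kappa: agreement beyond the chance level implied by marginals."""
    n = cm.sum()
    p_observed = np.trace(cm) / n
    # Chance agreement from the row and column marginal totals.
    p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    return (p_observed - p_expected) / (1.0 - p_expected)

print(overall_accuracy(cm))  # 0.95
print(round(kappa(cm), 2))   # 0.9
```

A kappa near 1 indicates the classifier's agreement with the reference data far exceeds what the class proportions alone would produce by chance, which is why the abstract reports kappa alongside overall accuracy.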