Abstract

Filling gaps caused by thick cloud cover or sensor malfunctions has long posed a significant challenge in the preprocessing of optical remote sensing images. The concept of similar pixels, derived from spatial and temporal similarities in remote sensing scenes, has been widely adopted, leading to the development of various gap-filling models. However, in complex scenarios characterized by spatiotemporally heterogeneous surfaces and extensive missing areas, current models often produce noticeable noise-like artifacts, distorted spectral signatures, and unreliable spatial textures. To address this challenge, we propose a simple yet effective similar-pixel-based approach called progressive gap-filling through the cascading temporal and spatial framework (PGFCTS). Motivated by the observation that the robustness of similar pixels decreases with their spatial distance from the target pixel, we employ a progressive gap-filling scheme that keeps similar pixels spatially close to the target pixel. This significantly enhances the effectiveness of similar pixels and improves the accuracy of the reconstruction model. Moreover, unlike traditional methods that integrate spatial and temporal information in parallel, our approach integrates the two sources in a cascading manner: a temporal model first produces preliminary results with fine texture details, and a spatial model then enhances spectral fidelity. In tests on two gap-filling missions and comparisons with seven classical methods, PGFCTS effectively eliminates noise-like artifacts, faithfully restores spectral features, and accurately reproduces spatial details. Quantitative assessment shows that PGFCTS consistently outperforms the other methods, achieving the best scores. Importantly, our method maintains its superiority as the time interval between the target and reference images grows. In summary, PGFCTS is an effective solution for filling gaps and restoring surface information, thereby enhancing the usability of optical images.
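To make the cascading idea concrete, the sketch below gives one plausible single-band reading of the pipeline. It is not the authors' implementation: it assumes gap pixels are marked as NaN in `target`, that `reference` is a co-registered, gap-free image of the same scene from another date, and that similarity is plain radiometric closeness in the reference image. The names (`fill_gaps_progressive`), the window size, the neighbor count `k`, and the equal-weight blend of the temporal and spatial predictions are all illustrative choices, not values from the paper.

```python
# Hedged sketch of progressive, cascading temporal-then-spatial gap
# filling. Assumptions: single band, NaN marks gaps, reference image is
# co-registered and complete; all parameter values are illustrative.
import numpy as np

def _has_valid_neighbor(valid):
    # True where at least one 4-connected neighbor is valid.
    out = np.zeros_like(valid)
    out[1:, :] |= valid[:-1, :]
    out[:-1, :] |= valid[1:, :]
    out[:, 1:] |= valid[:, :-1]
    out[:, :-1] |= valid[:, 1:]
    return out

def fill_gaps_progressive(target, reference, window=7, k=20):
    filled = target.astype(float)
    half = window // 2
    gap = np.isnan(filled)
    while gap.any():
        # Progressive scheme: fill only the current gap boundary, so each
        # target pixel always has valid (or freshly filled) pixels nearby.
        boundary = gap & _has_valid_neighbor(~gap)
        if not boundary.any():
            break  # gap region with no valid neighbors anywhere
        for r, c in zip(*np.nonzero(boundary)):
            r0, r1 = max(0, r - half), min(filled.shape[0], r + half + 1)
            c0, c1 = max(0, c - half), min(filled.shape[1], c + half + 1)
            ref_win = reference[r0:r1, c0:c1]
            tgt_win = filled[r0:r1, c0:c1]
            valid = ~np.isnan(tgt_win)
            if not valid.any():
                continue
            # Select the k pixels most similar to the target pixel in the
            # reference image; weight them by that similarity.
            diff = np.abs(ref_win[valid] - reference[r, c])
            idx = np.argsort(diff)[:k]
            sim_tgt = tgt_win[valid][idx]
            sim_ref = ref_win[valid][idx]
            w = 1.0 / (diff[idx] + 1e-6)
            # Temporal model: transfer the local target-reference offset
            # onto the reference value (preserves fine texture).
            temporal = reference[r, c] + np.average(sim_tgt - sim_ref, weights=w)
            # Cascaded spatial model: pull the temporal prediction toward
            # the weighted mean of similar pixels to restore spectral
            # fidelity. The 0.5/0.5 blend is an illustrative assumption.
            spatial = np.average(sim_tgt, weights=w)
            filled[r, c] = 0.5 * temporal + 0.5 * spatial
        gap = np.isnan(filled)
    return filled
```

Peeling the gap one boundary layer at a time is what keeps the similar pixels spatially close to each target pixel, mirroring the negative distance-robustness correlation that motivates the progressive scheme described above.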
