Clouds and shadows often contaminate optical remote sensing images, leaving gaps of missing information. Continuous spatiotemporal monitoring of the Earth's surface therefore requires efficient removal of clouds and shadows. Unlike optical satellites, synthetic aperture radar (SAR) images actively under all weather conditions, supplying valuable supplementary information for reconstructing missing regions. Nevertheless, reconstructing high-fidelity cloud-free images through SAR-optical data fusion remains challenging because of the differing imaging mechanisms and the considerable speckle noise inherent in SAR imagery. To address these challenges, this paper presents a novel hybrid dynamic residual self-attention network (HDRSA-Net) that fully exploits the potential of SAR images for reconstructing missing regions. HDRSA-Net comprises multiple dynamic interaction residual (DIR) groups organized into an end-to-end trainable, deeply stacked hierarchical architecture. Within each group, an omni-dimensional dynamic local exploration (ODDLE) module and a sparse global context aggregation (SGCA) module jointly perform adaptive extraction and implicit enhancement of local and global features. A multi-task cooperative optimization loss function is designed to ensure that the results exhibit high spectral fidelity and coherent spatial structure. Additionally, this paper releases a large dataset that enables comprehensive evaluation of reconstruction quality under different cloud coverages and ground-cover types, providing a solid foundation for both visually satisfying restoration and reliable downstream semantic applications. Compared with current representative algorithms, the proposed approach reconstructs missing regions effectively and stably. The project is accessible at: https://github.com/RSIIPAC/LuojiaSET-OSFCR.
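The abstract does not give implementation details, but the DIR group's local-global design can be sketched concretely. Below is a minimal, hypothetical PyTorch sketch: a dynamic-kernel convolution stands in for ODDLE and top-k sparse self-attention stands in for SGCA. All class interfaces, channel counts, and hyperparameters here are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ODDLE(nn.Module):
    """Simplified dynamic convolution: a gating branch predicts per-sample
    mixing weights over K candidate kernels, so the effective filter adapts
    to the input (a hypothetical reading of the ODDLE module)."""
    def __init__(self, channels, k=4):
        super().__init__()
        self.k = k
        self.kernels = nn.Parameter(torch.randn(k, channels, channels, 3, 3) * 0.02)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, k, 1), nn.Softmax(dim=1))

    def forward(self, x):
        b, c, h, w = x.shape
        alpha = self.gate(x).view(b, self.k, 1, 1, 1, 1)     # per-sample kernel weights
        weight = (alpha * self.kernels.unsqueeze(0)).sum(1)  # (B, C, C, 3, 3)
        # Fold the batch into conv groups to apply a different kernel per sample.
        out = F.conv2d(x.reshape(1, b * c, h, w),
                       weight.reshape(b * c, c, 3, 3),
                       padding=1, groups=b)
        return out.reshape(b, c, h, w)

class SGCA(nn.Module):
    """Sparse global context sketched as top-k self-attention: each query
    attends only to its k_top most similar keys, keeping a global receptive
    field while suppressing weak (often noisy) responses."""
    def __init__(self, channels, k_top=16):
        super().__init__()
        self.k_top = k_top
        self.qkv = nn.Conv2d(channels, channels * 3, 1)
        self.proj = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).reshape(b, 3, c, h * w).unbind(1)  # each (B, C, N)
        attn = (q.transpose(1, 2) @ k) / c ** 0.5                # (B, N, N)
        topv, topi = attn.topk(min(self.k_top, attn.shape[-1]), dim=-1)
        sparse = torch.full_like(attn, float('-inf')).scatter(-1, topi, topv)
        out = (sparse.softmax(-1) @ v.transpose(1, 2)).transpose(1, 2)
        return self.proj(out.reshape(b, c, h, w)) + x

class DIRGroup(nn.Module):
    """One dynamic interaction residual (DIR) group: local dynamic features
    and sparse global context fused under a residual skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.local, self.glob = ODDLE(channels), SGCA(channels)
        self.fuse = nn.Conv2d(channels * 2, channels, 1)

    def forward(self, x):
        return x + self.fuse(torch.cat([self.local(x), self.glob(x)], dim=1))

# Toy usage on a fused SAR-optical feature map (channel count assumed).
feats = torch.randn(2, 32, 32, 32)
print(DIRGroup(32)(feats).shape)  # torch.Size([2, 32, 32, 32])
```

The residual skip around each group mirrors the abstract's hierarchical stacking: several DIRGroup instances can be chained end to end while gradients flow through the identity path.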
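The multi-task cooperative loss is characterized only by its goals of spectral fidelity and spatial coherence. A minimal sketch, assuming a weighted combination of an L1 pixel term, a spectral-angle term, and an image-gradient term (the terms and weights are placeholders, not the paper's definition):

```python
import torch
import torch.nn.functional as F

def spectral_angle(pred, target, eps=1e-8):
    """Mean per-pixel spectral angle between predicted and reference bands,
    a common spectral-fidelity term (assumed here for illustration)."""
    dot = (pred * target).sum(1)
    norm = pred.norm(dim=1) * target.norm(dim=1)
    return torch.acos((dot / (norm + eps)).clamp(-1 + 1e-6, 1 - 1e-6)).mean()

def gradient_l1(pred, target):
    """L1 on horizontal/vertical image gradients, encouraging coherent
    spatial structure in the reconstruction."""
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]
    return (dx(pred) - dx(target)).abs().mean() + (dy(pred) - dy(target)).abs().mean()

def cooperative_loss(pred, target, w_pix=1.0, w_spec=0.1, w_grad=0.5):
    """Weighted multi-term objective; the weights are placeholder values."""
    return (w_pix * F.l1_loss(pred, target)
            + w_spec * spectral_angle(pred, target)
            + w_grad * gradient_l1(pred, target))
```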