Pansharpening plays a crucial role in remote sensing image processing: it fuses a low-resolution multispectral (LRMS) image with a high-resolution panchromatic (PAN) image to generate a high-resolution multispectral (HRMS) image, compensating for satellite sensors that cannot capture HRMS imagery directly. Although deep learning-based pansharpening methods have advanced well beyond traditional approaches, most existing techniques either ignore the modal differences between LRMS and PAN images, relying on direct concatenation, or use near-identical network structures to extract spectral and spatial information. In addition, many methods neglect the common features shared by LRMS and PAN images and lack network architectures specifically designed for spectral feature extraction. To address these limitations, this study proposes a novel three-branch pansharpening network that leverages interactions in both the spatial and frequency domains, improving the spectral and spatial fidelity of the fusion outputs. The proposed method was validated on three datasets: IKONOS, WorldView-3 (WV3), and WorldView-4 (WV4). The results demonstrate that it surpasses several leading techniques in both visual quality and quantitative metrics.
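To make the three-branch idea concrete, below is a minimal sketch, assuming PyTorch, of one plausible realization: a spectral branch over the upsampled LRMS input, a spatial branch over the PAN input, a common branch over their concatenation, and an amplitude-mixing interaction in the Fourier domain. All module names, channel widths, and the fusion rule (`FreqInteraction`, `ThreeBranchPansharpen`, `width=32`, amplitude/phase recombination) are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of a three-branch pansharpening network with a
# frequency-domain interaction; design details are assumptions, not the paper's.
import torch
import torch.nn as nn

class FreqInteraction(nn.Module):
    """Mix amplitude spectra of two feature maps in the Fourier domain."""
    def __init__(self, channels):
        super().__init__()
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, a, b):
        # Real 2-D FFT; amplitudes carry global spectral/style statistics.
        fa = torch.fft.rfft2(a, norm="ortho")
        fb = torch.fft.rfft2(b, norm="ortho")
        amp = self.mix(torch.cat([fa.abs(), fb.abs()], dim=1))
        # Recombine the mixed amplitude with the phase of the first input.
        fused = torch.polar(amp, torch.angle(fa))
        return torch.fft.irfft2(fused, s=a.shape[-2:], norm="ortho")

class ThreeBranchPansharpen(nn.Module):
    def __init__(self, ms_bands=4, width=32):
        super().__init__()
        def branch(cin):
            return nn.Sequential(
                nn.Conv2d(cin, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True))
        self.spectral = branch(ms_bands)      # spectral branch: upsampled LRMS
        self.spatial = branch(1)              # spatial branch: PAN
        self.common = branch(ms_bands + 1)    # common branch: shared features
        self.freq = FreqInteraction(width)    # frequency-domain interaction
        self.head = nn.Conv2d(3 * width, ms_bands, 3, padding=1)

    def forward(self, lrms, pan):
        # Upsample LRMS to the PAN resolution (bicubic is a common choice).
        ms_up = nn.functional.interpolate(
            lrms, size=pan.shape[-2:], mode="bicubic", align_corners=False)
        fs = self.spectral(ms_up)
        fp = self.spatial(pan)
        fc = self.common(torch.cat([ms_up, pan], dim=1))
        fq = self.freq(fs, fp)                # spatial/frequency interaction
        # Residual prediction over the upsampled LRMS biases the network
        # toward preserving spectral content.
        return ms_up + self.head(torch.cat([fq, fp, fc], dim=1))

hrms = ThreeBranchPansharpen()(torch.rand(1, 4, 64, 64), torch.rand(1, 1, 256, 256))
print(hrms.shape)  # torch.Size([1, 4, 256, 256])
```

The residual formulation at the output is a widely used choice in pansharpening, since it anchors the prediction to the LRMS spectral statistics while the branches contribute high-frequency spatial detail.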