Abstract

Pan-sharpening aims to super-resolve low-spatial-resolution multispectral (MS) images under the guidance of high-resolution (HR), texture-rich panchromatic (PAN) images. Recently, deep-learning-based pan-sharpening approaches have dominated this field and achieved remarkable advances. However, most promising algorithms are devised as one-way mappings and do not fully explore the mutual dependencies between the PAN and MS modalities, which limits model performance. To address this issue, in this work we propose a novel information compensation and integration network for pan-sharpening based on effective cross-modality joint learning. First, a cross-central difference convolution is employed to explicitly extract the texture details of the PAN images. Second, we implement the compensation process by imitating the classical back-projection (BP) technique, where the extracted PAN textures iteratively guide the intrinsic information learning of the MS images. Finally, we devise a hierarchical transformer to integrate the relations of stage-iteration information across spatial and temporal contexts. Extensive experiments over multiple satellite datasets demonstrate the superiority of our method over existing state-of-the-art methods. The source code is available at https://github.com/manman1995/pansharpening.
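As a rough illustration of the two ideas named above, the sketch below shows a standard central-difference convolution (the generic CDC formulation, not necessarily the paper's cross-central variant) and a back-projection-style compensation step, assuming a PyTorch implementation. The module names, channel counts, and the `fuse` projection are hypothetical and chosen only for the example; they are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CDConv2d(nn.Module):
    """Central-difference convolution: a vanilla convolution combined with a
    central-difference term that emphasizes local gradients/texture."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=False)
        self.theta = theta  # weight of the difference term

    def forward(self, x):
        out = self.conv(x)  # vanilla convolution
        if self.theta == 0.0:
            return out
        # The difference term reduces to a 1x1 convolution whose kernel is the
        # spatial sum of the original kernel, applied to the center pixel.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_center = F.conv2d(x, kernel_sum)
        return out - self.theta * out_center

def back_projection_step(ms_feat, pan_texture, fuse):
    """One illustrative compensation step in the spirit of back-projection:
    measure the residual between the PAN texture and the projected MS feature,
    then feed it back to refine the MS feature."""
    residual = pan_texture - fuse(ms_feat)  # cross-modality error
    return ms_feat + residual               # compensate the MS feature

# Toy usage with hypothetical shapes
pan = torch.randn(1, 1, 64, 64)       # HR panchromatic image
ms_feat = torch.randn(1, 8, 64, 64)   # upsampled MS feature (8 channels)
cdc = CDConv2d(1, 8)                  # extract PAN texture into 8 channels
fuse = nn.Conv2d(8, 8, 1)             # hypothetical projection back to texture space
pan_texture = cdc(pan)
for _ in range(3):                    # iterate the compensation a few times
    ms_feat = back_projection_step(ms_feat, pan_texture, fuse)
```

In practice, the residual feedback would be learned (e.g., via convolutional blocks) rather than a plain addition, and the stage-wise features produced by such iterations would then be aggregated by the hierarchical transformer described in the abstract.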
