Abstract

Remote sensing images (RSIs) with concurrently high spatial, temporal, and spectral resolutions cannot be produced by a single sensor. Multisource RSI fusion is a practical technique for obtaining high-spatial-resolution multispectral (MS) images (spatial-spectral fusion, SSF) and high-temporal- and high-spatial-resolution MS images (spatiotemporal fusion, STF). Current deep learning-based fusion models implement either SSF or STF; no model performs both. This paper proposes multiresolution generative adversarial networks with bidirectional adaptive-stage progressive guided fusion (BAPGF), named BPF-MGAN, to implement both SSF and STF for RSIs. A bidirectional adaptive-stage feature extraction architecture operating in both fine-scale-to-coarse-scale and coarse-scale-to-fine-scale modes is introduced. The designed BAPGF adopts a cross-stage-level dual-residual attention fusion strategy, guided by the previous fusion result, to enhance critical information and suppress superfluous information. Adaptive-resolution U-shaped discriminators feed multiresolution context back into the generator. A generalized multitask loss function, not restricted by the absence of reference images, is developed to strengthen the model through constraints on multiscale feature, structural, and content similarities. The BPF-MGAN model is validated on both SSF and STF datasets. Compared with state-of-the-art SSF and STF models, the results demonstrate the superior performance of BPF-MGAN in both subjective and objective evaluations.
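To make the BAPGF idea concrete, the sketch below illustrates one plausible reading of a previous-result-guided dual-residual attention fusion step in plain NumPy. All names and the sigmoid gating choice are assumptions for illustration; the paper's actual layer definitions are not given in the abstract.

```python
import numpy as np

def sigmoid(x):
    """Numerically simple logistic gate."""
    return 1.0 / (1.0 + np.exp(-x))

def bapgf_fusion(feat_fine, feat_coarse, prev_fusion):
    """Hypothetical sketch of one BAPGF stage: the previous stage's
    fusion result produces an attention map that enhances critical
    features and suppresses superfluous ones, with dual residual
    connections carrying both input feature maps forward."""
    # Attention map derived from the previous fusion result
    # (assumption: a simple element-wise sigmoid gate).
    attn = sigmoid(prev_fusion)
    # Gated mixture of fine-scale and coarse-scale features.
    enhanced = attn * feat_fine + (1.0 - attn) * feat_coarse
    # Dual residual: add both source features back to the gated mixture.
    return enhanced + 0.5 * (feat_fine + feat_coarse)

# Toy usage on 8x8 single-channel feature maps.
fine = np.ones((8, 8))
coarse = np.zeros((8, 8))
prev = np.zeros((8, 8))      # neutral guidance -> attn == 0.5 everywhere
fused = bapgf_fusion(fine, coarse, prev)
```

With neutral guidance (`prev == 0`) the gate is 0.5, so the output here is `0.5 * fine + 0.5 * (fine + coarse) = 1.0` everywhere; stronger guidance would shift weight toward the fine-scale features.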

