Abstract

Deep learning, especially convolutional neural networks (CNNs), has shown very promising results for the multispectral (MS) and hyperspectral (HS) image fusion (MS/HS fusion) task. Most existing CNN methods are "black-box" models that are not specifically designed for MS/HS fusion: they largely ignore the priors evidently possessed by the observed HS and MS images and lack clear interpretability, leaving room for further improvement. In this paper, we propose an interpretable network, named the spatial-spectral dual-optimization model-driven deep network (S²DMDN), which embeds the intrinsic generation mechanism of MS/HS fusion into the network. It has two key characteristics: (i) it explicitly encodes, in the network architecture, the spatial and spectral priors evidently possessed by the input MS and HS images; (ii) it unfolds an iterative spatial-spectral dual-optimization algorithm into a model-driven deep network. As a result, the network has good interpretability and generalization capability, and the fused image is semantically richer and spatially more precise. Extensive experiments demonstrate the superiority of the proposed method over other state-of-the-art methods in terms of both quantitative evaluation metrics and qualitative visual quality.
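To make the unfolding idea concrete, the sketch below unrolls a few plain gradient steps of a simplified MS/HS fusion data model. The observation model (average-pool spatial degradation, a spectral response matrix `R`) and all names are illustrative assumptions, not the paper's exact formulation; a trained unfolded network such as S²DMDN would replace the fixed step with learned spatial and spectral modules.

```python
import numpy as np

def unfolded_fusion(Y_hs, X_ms, R, K=100, eta=0.5):
    """Run K unrolled gradient steps of a simplified MS/HS fusion model.

    Assumed observation model (illustrative, not the paper's exact one):
      Y_hs ~ downsample(Z)         # low-spatial-resolution HS image, (B, h, w)
      X_ms ~ R applied per pixel   # high-resolution MS image, R: (b, B)
    Each step descends on 0.5*||D(Z)-Y||^2 + 0.5*||R(Z)-X||^2; an unfolded
    network would interleave these data-term steps with learned priors.
    """
    B, h, w = Y_hs.shape
    s = X_ms.shape[1] // h                      # spatial downsampling factor
    Z = np.kron(Y_hs, np.ones((s, s)))          # init: upsampled HS image
    for _ in range(K):
        # spatial data term: average-pool Z and compare with Y_hs
        Dz = Z.reshape(B, h, s, w, s).mean(axis=(2, 4))
        g_spat = np.kron(Dz - Y_hs, np.ones((s, s))) / (s * s)
        # spectral data term: apply response R and compare with X_ms
        Rz = np.tensordot(R, Z, axes=(1, 0))
        g_spec = np.tensordot(R.T, Rz - X_ms, axes=(1, 0))
        Z = Z - eta * (g_spat + g_spec)
    return Z
```

In a deep-unfolding network, each of the `K` iterations becomes one network stage whose step size and proximal operators are learned end to end, which is what gives the architecture its interpretability.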
