Abstract

Recently, deep unfolding networks (DUNs) have been applied to the fusion of low spatial resolution hyperspectral (LR HS) and high spatial resolution multispectral (HR MS) images and have produced satisfactory high spatial resolution hyperspectral (HR HS) results. However, these networks do not sufficiently exploit the low-rank and sparse priors of HS images. In this paper, we establish an LR HS and HR MS image fusion model based on robust principal component analysis (RPCA), which simultaneously captures the low-rank and sparse properties of HS images. The fusion model is then optimized with the alternating direction method of multipliers (ADMM). To make full use of the representation capacity of DUNs, we unfold the derived ADMM algorithm into a network, named the low-rank unfolding network (LRU-Net). Specifically, each ADMM iteration is unfolded as one stage of LRU-Net, in which the low-rank and sparse priors are learned by a singular value thresholding (SVT) module and a sparse module, respectively. Finally, the features from all stages are integrated to produce the desired HR HS image. Experiments on three benchmark datasets demonstrate the effectiveness of the proposed LRU-Net: it outperforms state-of-the-art fusion methods both qualitatively and quantitatively. The source code is publicly available at https://github.com/RSMagneto/LRU-Net.
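
To make the RPCA building blocks concrete, below is a minimal NumPy sketch of the classical RPCA decomposition via ADMM, using SVT as the proximal operator for the low-rank term and soft thresholding for the sparse term. The parameter choices (`mu`, the default `lam`) and the plain iterative loop are illustrative assumptions drawn from the standard RPCA literature, not the paper's learned LRU-Net stages.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm.
    Shrinks singular values toward zero, promoting a low-rank estimate."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

def soft_threshold(X, lam):
    """Soft thresholding: proximal operator of the l1 norm (sparse prior)."""
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def rpca_admm(M, lam=None, mu=1.0, n_iters=100):
    """Classical RPCA via ADMM: decompose M into low-rank L plus sparse S,
    i.e., min ||L||_* + lam*||S||_1  s.t.  M = L + S.
    Hypothetical fixed parameters; LRU-Net instead learns each stage."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))   # common default from the RPCA literature
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                 # scaled dual variable
    for _ in range(n_iters):
        L = svt(M - S + Y / mu, 1.0 / mu)             # low-rank update via SVT
        S = soft_threshold(M - L + Y / mu, lam / mu)  # sparse update
        Y = Y + mu * (M - L - S)                      # dual ascent on the constraint
    return L, S
```

In a deep unfolding network, each pass through this loop body corresponds to one network stage, with the hand-set thresholds replaced by learned modules.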
