Abstract

The fusion of panchromatic (PAN) and multispectral (MS) images, known as pansharpening, aims to generate high-resolution multispectral (HRMS) images. Among pansharpening methods, deep learning-based approaches have become the most popular solution. However, most deep learning-based methods struggle to balance the preservation of spectral and spatial information, and they lack interpretability. In this paper, we propose an interpretable deep neural network based on an observation model. Specifically, we hypothesize that the HRMS image is the sum of a structural part and a spectral part, where the spectral part is obtained by upsampling the MS image. According to the observation model, we construct two variational models describing two linear mapping relationships: one between the HRMS and MS images, and another between the HRMS and PAN images. The structural part is obtained by alternately solving the two variational models with the proximal gradient descent algorithm. Finally, we construct a panchromatic and multispectral image fusion network (PMFNet) using the deep unfolding method. PMFNet is interpretable, as its layers correspond to the iterative solving steps of our proposed algorithm. Extensive experiments on the GaoFen-2 and WorldView-2 datasets show that the proposed method is superior to state-of-the-art methods.
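The abstract's core optimization tool, proximal gradient descent, can be illustrated with a minimal, generic sketch. The code below solves an ℓ1-regularized least-squares problem with ISTA (iterative soft-thresholding); the operator `A`, step size, and regularizer are illustrative stand-ins, not the paper's actual variational models or observation operators.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, step, iters):
    """Proximal gradient descent (ISTA) for min_x 0.5||Ax - b||^2 + lam*||x||_1.

    Each iteration takes a gradient step on the smooth data-fidelity term,
    then applies the proximal operator of the nonsmooth regularizer --
    the same gradient-step / proximal-step pattern that deep unfolding
    networks such as PMFNet turn into learnable layers.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of the quadratic term
        x = soft_threshold(x - step * grad, step * lam)  # proximal step
    return x
```

In a deep unfolding network, each ISTA iteration becomes one network stage, with the hand-crafted operators (here `A` and the soft-threshold) replaced by learned convolutions, which is what makes the resulting architecture interpretable.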
