Abstract

Pansharpening refers to fusing a high spatial resolution panchromatic image with a simultaneously acquired multispectral image of lower spatial resolution. In this paper, we propose a Laplacian pyramid pansharpening network architecture that accurately fuses these two inputs to produce a high spatial resolution multispectral image. The proposed architecture addresses three aspects. First, we use a Laplacian pyramid, whose blur kernels are designed according to the sensors' modulation transfer functions, to separate the images into multiple scales and fully exploit the crucial spatial information at each scale. Second, we develop a fusion convolutional neural network (FCNN) for each scale and combine these networks to form the final multi-scale architecture. Specifically, the FCNN uses recursive layers to share parameters across and within pyramid levels, significantly reducing the number of network parameters. Third, a total loss composed of loss terms at multiple scales is employed for training, yielding higher accuracy. Extensive quantitative and qualitative experiments on benchmark datasets demonstrate that the proposed architecture outperforms state-of-the-art pansharpening methods. Code is available at https://github.com/ChengJin-git/LPPN.
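For intuition, the sketch below illustrates the three ingredients the abstract names: a Laplacian pyramid decomposition, a recursive (weight-shared) convolutional block, and a loss summed over pyramid scales. It is a minimal PyTorch sketch, not the authors' implementation (see the linked repository). In particular, the plain Gaussian kernel and the names `gaussian_kernel`, `RecursiveFusionBlock`, and `multiscale_loss`, along with all sizes and hyperparameters, are placeholder assumptions; the paper designs its blur kernels from the actual sensor modulation transfer functions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(size=5, sigma=1.0):
    # Placeholder low-pass kernel; the paper derives MTF-matched kernels instead.
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    kernel = g[:, None] * g[None, :]
    return kernel / kernel.sum()

def blur(x, kernel):
    # Depthwise convolution: the same blur kernel is applied to every band.
    c = x.shape[1]
    k = kernel.to(x.device).expand(c, 1, *kernel.shape).contiguous()
    return F.conv2d(x, k, padding=kernel.shape[-1] // 2, groups=c)

def laplacian_pyramid(x, levels=3, kernel=None):
    """Decompose x into `levels` band-pass images plus a low-pass residual."""
    kernel = gaussian_kernel() if kernel is None else kernel
    pyramid, current = [], x
    for _ in range(levels):
        low = blur(current, kernel)
        down = F.avg_pool2d(low, 2)
        up = F.interpolate(down, size=current.shape[-2:],
                           mode='bilinear', align_corners=False)
        pyramid.append(current - up)  # band-pass detail at this scale
        current = down
    pyramid.append(current)           # low-frequency residual
    return pyramid

class RecursiveFusionBlock(nn.Module):
    """Toy stand-in for the per-scale FCNN: one conv layer applied
    recursively, so the same weights are reused at every step."""
    def __init__(self, channels, steps=3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.steps = steps

    def forward(self, x):
        out = x
        for _ in range(self.steps):   # weight sharing within the level
            out = F.relu(self.conv(out)) + x
        return out

def multiscale_loss(predictions, targets, weights=None):
    """Total loss as a weighted sum of per-scale L1 terms (illustrative)."""
    weights = weights if weights is not None else [1.0] * len(predictions)
    return sum(w * F.l1_loss(p, t)
               for w, p, t in zip(weights, predictions, targets))

# Usage sketch: decompose a 4-band multispectral tensor into 3 levels.
ms = torch.randn(1, 4, 64, 64)
bands = laplacian_pyramid(ms, levels=3)
print([t.shape for t in bands])  # three band-pass levels + low-pass residual
```

One fusion block would be instantiated per pyramid level, and sharing the recursive weights across levels as well (as the abstract describes) would shrink the parameter count further.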
