Abstract

Deep learning based methods are the state of the art in panchromatic (PAN)/multispectral (MS) fusion, commonly called "pansharpening". In this paper, to address the insufficient spatial enhancement of most existing deep learning based pansharpening methods, we propose a novel pansharpening method based on a residual convolutional neural network (RCNN). Unlike existing deep learning based pansharpening methods, which are mainly devoted to designing an effective network architecture, we make novel changes to the input and the output of the network and propose a simple but effective mapping strategy. The network maps the differential information between the high spatial resolution panchromatic (HR-PAN) image and the low spatial resolution multispectral (LR-MS) image to the differential information between the HR-PAN image and the high spatial resolution multispectral (HR-MS) image; we call this the "differential information mapping strategy". Moreover, to further boost the spatial information in the fusion results, the proposed method makes full use of the LR-MS image and feeds the gradient information of the up-sampled LR-MS image (Up-LR-MS) to the network as auxiliary data. Furthermore, an attention module and residual blocks are incorporated into the network structure to maximize its feature extraction ability. Experiments on four data sets collected by different satellites confirm the superior performance of the proposed method compared with state-of-the-art pansharpening methods.
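To make the differential information mapping strategy concrete, the following is a minimal NumPy sketch of the fusion arithmetic only. The function and variable names, the nearest-neighbour up-sampling, and the placeholder `network` callable are all illustrative assumptions, not the paper's actual implementation: the network is assumed to take the input differential (HR-PAN minus Up-LR-MS) together with the Up-LR-MS gradients and return the output differential (HR-PAN minus HR-MS), from which the HR-MS image is recovered by subtraction.

```python
# Hypothetical sketch of the differential information mapping strategy.
# All names are illustrative; the real method uses a trained RCNN with
# an attention module and residual blocks in place of `network`.
import numpy as np

def upsample_nearest(lr_ms, scale):
    """Nearest-neighbour up-sampling of an LR-MS image of shape (h, w, C)."""
    return np.repeat(np.repeat(lr_ms, scale, axis=0), scale, axis=1)

def fuse(hr_pan, lr_ms, scale, network):
    """Pansharpen via the differential mapping strategy.

    `network` is assumed to map the input differential (plus the
    Up-LR-MS gradients as auxiliary data) to the output differential
    HR-PAN - HR-MS.
    """
    up_ms = upsample_nearest(lr_ms, scale)          # Up-LR-MS, (H, W, C)
    diff_in = hr_pan[..., None] - up_ms             # HR-PAN - Up-LR-MS
    gy, gx = np.gradient(up_ms, axis=(0, 1))        # auxiliary spatial gradients
    diff_out = network(diff_in, gy, gx)             # predicted HR-PAN - HR-MS
    return hr_pan[..., None] - diff_out             # recovered HR-MS
```

As a sanity check, an identity "network" that returns its input differential unchanged makes `fuse` reproduce the Up-LR-MS image exactly, since HR-PAN - (HR-PAN - Up-LR-MS) = Up-LR-MS; any spatial enhancement therefore comes entirely from how well the learned network narrows the gap between the two differentials.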
