Abstract

Pansharpening fuses a low-resolution multispectral (LRMS) image with a high-resolution panchromatic (PAN) image to generate a high-resolution multispectral (HRMS) image. Recently, deep-learning (DL) based pansharpening methods have received widespread attention because of their powerful fitting ability and efficient feature extraction. Since no existing method makes full use of the different levels of feature information in PAN images for deep fusion with MS images, we propose a new end-to-end deeply coupled feedback network, named PSCF-Net, that achieves high-quality image fusion at the feature level. First, features are extracted from the PAN and MS images by separate feature extraction blocks. Then, these features are deeply fused through two subnetworks composed of coupled feedback blocks, whose coupling and feedback mechanisms enable high-quality fusion of features across levels and images. Finally, the feature maps of the two subnetworks are combined by a channel integration layer to produce the final HRMS image. To make full use of the spatial information of the PAN image and the spectral information of the LRMS image, the extracted features include the MS-image features and the low- and high-level features of the PAN image, and the low-level PAN features are injected with spectral information before being fed to the subnetworks. During training, we use SmoothL1 combined with Structural Similarity (SSIM) as the loss function, and we experiment on the IKONOS and WorldView-2 datasets. Reduced- and full-scale experimental results show that the proposed deeply coupled feedback network outperforms several popular traditional and DL-based methods. Source code is available at https://github.com/ahu-dsp/PSCF-Net.
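The SmoothL1 + SSIM training objective mentioned above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the global (single-window) SSIM, the stability constants `c1`/`c2`, and the weighting factor `lam` are assumptions, since the abstract does not specify them.

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    # Huber-style SmoothL1: quadratic for small errors, linear for large ones.
    d = np.abs(pred - target)
    return np.mean(np.where(d < beta, 0.5 * d**2 / beta, d - 0.5 * beta))

def global_ssim(x, y, c1=0.01**2, c2=0.03**2):
    # Global SSIM over the whole image (the paper may use a windowed
    # variant; constants c1, c2 are the conventional defaults, assumed here).
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def fusion_loss(pred, target, lam=0.1):
    # Combined objective: SmoothL1 plus an SSIM penalty (1 - SSIM).
    # The weight lam is a hypothetical choice, not taken from the paper.
    return smooth_l1(pred, target) + lam * (1.0 - global_ssim(pred, target))
```

For identical images the loss is zero (SmoothL1 vanishes and SSIM equals 1), and it grows as the fused output drifts from the reference, so minimizing it encourages both pixel-level and structural fidelity.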
