Abstract

Pansharpening is the process of fusing a low-resolution multispectral (LRMS) image with a high-resolution panchromatic (PAN) image. During pansharpening, the LRMS image is often directly upsampled by a factor of 4, which may result in the loss of high-frequency details in the fused high-resolution multispectral (HRMS) image. To solve this problem, we put forward a novel progressive cascade deep residual network (PCDRN) with two residual subnetworks for pansharpening. The network resizes the MS image toward the size of the PAN image in two steps, gradually fusing the LRMS image with the PAN image in a coarse-to-fine manner. To prevent overly smooth results and achieve high-quality fusion, a multitask loss function is defined to train our network. Furthermore, to eliminate checkerboard artifacts in the fusion results, we employ a resize-convolution approach instead of transposed convolution for upsampling LRMS images. Experimental results on the Pléiades and WorldView-3 datasets show that PCDRN outperforms other popular pansharpening methods in both quantitative and visual assessments.
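The resize-convolution choice mentioned in the abstract can be illustrated with a minimal sketch: a nearest-neighbor resize followed by an ordinary convolution gives every output pixel the same number of kernel contributions, whereas the uneven kernel overlap of transposed convolution is what produces checkerboard artifacts. The fixed averaging kernel below is only a stand-in for learned weights; it is not the paper's actual network.

```python
import numpy as np

def resize_convolution(lrms, kernel, scale=2):
    """Upsample by nearest-neighbor resize, then convolve.

    Unlike transposed convolution, every output pixel receives the same
    number of kernel contributions, avoiding the uneven overlap that
    produces checkerboard artifacts.
    """
    # Nearest-neighbor resize: repeat rows and columns `scale` times.
    up = np.repeat(np.repeat(lrms, scale, axis=0), scale, axis=1)
    # 'Same'-padded 2-D convolution; the fixed smoothing kernel stands
    # in for the learned convolution weights of the network.
    kh, kw = kernel.shape
    padded = np.pad(up, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(up, dtype=float)
    for i in range(up.shape[0]):
        for j in range(up.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

lr = np.arange(16, dtype=float).reshape(4, 4)   # toy single-band LRMS patch
k = np.full((3, 3), 1.0 / 9.0)                  # placeholder averaging kernel
hr = resize_convolution(lr, k)
print(hr.shape)  # (8, 8)
```

In a trained network the kernel weights are learned; the key point is only the order of operations (resize first, then convolve).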

Highlights

  • Remote sensing satellites such as Pléiades, WorldView, and GeoEye provide low spatial resolution multispectral (LRMS) and high spatial resolution panchromatic (PAN) images

  • The results demonstrate that the PSNR obtained with the proposed mean squared error (MSE) + UIQI loss function is higher than that obtained with the MSE loss alone

  • The results obtained by the ATWT, GSA, MTF_GLP_CBD, MMMT, GS, ASIM, and DRPNN methods exhibit spectral distortion because they oversharpen during pansharpening
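The MSE + UIQI loss noted in the highlights can be sketched as follows. This is a minimal NumPy version assuming the global (single-window) form of Wang and Bovik's Universal Image Quality Index and a hypothetical weight `lam`; the paper's actual multitask loss and weighting may differ.

```python
import numpy as np

def uiqi(x, y, eps=1e-12):
    """Global Universal Image Quality Index (Wang & Bovik):
    Q = 4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y)) * (mean(x)^2+mean(y)^2)).
    Q = 1 iff the images are identical."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (4 * cov * mx * my) / ((vx + vy) * (mx ** 2 + my ** 2) + eps)

def fusion_loss(pred, target, lam=0.1):
    """MSE plus a (1 - UIQI) penalty; `lam` is an assumed trade-off weight."""
    mse = ((pred - target) ** 2).mean()
    return mse + lam * (1.0 - uiqi(pred, target))

a = np.arange(64, dtype=float).reshape(8, 8) / 64.0  # toy image patch
print(fusion_loss(a, a))  # ~0 for identical images
```

Because 1 - UIQI penalizes loss of local contrast and structure, adding it to MSE discourages the overly smooth outputs that MSE alone tends to produce.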


Summary

Introduction

Remote sensing satellites such as Pléiades, WorldView, and GeoEye provide low spatial resolution multispectral (LRMS) and high spatial resolution panchromatic (PAN) images. To obtain a high-quality fused image, Wei et al. [19] presented a deep residual network (ResNet) for pansharpening. In the above-mentioned deep learning-based pansharpening methods, the LRMS image is directly upsampled by a factor of 4 during fusion, which may cause a loss of high-frequency details because the nonlinear feature mapping is difficult to learn. In contrast to this direct fourfold upsampling, we adopt two successive upsampling operations and employ two residual subnetworks to learn the nonlinear feature mapping from the source images to the ground truth at two scales. Compared with several existing pansharpening methods, the experimental results demonstrate that PCDRN achieves better performance.
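The coarse-to-fine pipeline above can be sketched schematically: upsample by 2, fuse with the PAN image at the intermediate scale, then upsample by 2 again and fuse at full resolution. The `fuse` step below is a hypothetical detail-injection placeholder standing in for the paper's residual subnetworks, and the mid-scale PAN is obtained by simple decimation; both are assumptions for illustration only.

```python
import numpy as np

def upsample2(img):
    # Nearest-neighbor x2 resize (stand-in for the resize-convolution step).
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def fuse(ms, pan_at_scale):
    # Placeholder fusion standing in for a residual subnetwork:
    # inject the zero-mean PAN detail into the upsampled MS image.
    return ms + (pan_at_scale - pan_at_scale.mean())

def progressive_fusion(lrms, pan):
    """Coarse-to-fine x4 fusion via two x2 stages (schematic only)."""
    # Stage 1: x2 upsampling, fuse with PAN decimated to the mid scale.
    pan_mid = pan[::2, ::2]
    mid = fuse(upsample2(lrms), pan_mid)
    # Stage 2: another x2 upsampling, fuse with the full-resolution PAN.
    return fuse(upsample2(mid), pan)

lrms = np.ones((4, 4))       # toy single-band LRMS patch
pan = np.ones((16, 16))      # toy PAN patch at 4x the LRMS resolution
out = progressive_fusion(lrms, pan)
print(out.shape)  # (16, 16)
```

Splitting the x4 mapping into two x2 stages lets each subnetwork learn a smaller, easier residual, which is the motivation the introduction gives for the progressive design.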

Residual Network
Universal Image Quality Index
Methods
Training Details
Compared Methods
Experiments on WorldView-3 Dataset
Experiments on Real Data
Experiments on Pléiades Dataset
Findings
Further Discussion
