Abstract

Deep-learning-based pansharpening has become a hot research topic in recent years due to its impressive performance, and convolutional neural network (CNN)-based pansharpening methods following Wald's protocol (i.e., the general adoption of a network learned at a coarser, reduced-resolution scale to the finer full-resolution scale) have long dominated this research area. However, the underlying scale-invariance assumption may not be accurate enough to make full use of the spatial and spectral information of the original panchromatic (PAN) and multispectral (MS) images at full resolution. In this paper, a Supervised-Unsupervised combined Fusion Network (SUFNet) for high-fidelity pansharpening is proposed to alleviate this problem. First, considering the robustness conferred by training with reference label images, a novel supervised network based on Wald's protocol, termed SMDSNet, is proposed by integrating multiscale mechanisms, dilated convolution, and skip connections. Then, an Unsupervised Spatial-Spectral Compensation Network (USSCNet), which requires no real high-spatial-resolution (HR) MS label image, is proposed to enhance the spatial and spectral fidelity of the SMDSNet. Qualitative and quantitative results of reduced-resolution and full-resolution experiments on different satellite datasets demonstrate the competitive performance of the proposed method. Furthermore, the proposed USSCNet can be employed as a universal spatial-spectral compensation framework for other pansharpening methods.
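The training-pair construction under Wald's protocol described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and simple block averaging stands in for the sensor-matched low-pass filtering and decimation typically used in practice.

```python
import numpy as np

def block_mean(img, r):
    """Reduce spatial resolution by an integer factor r via block averaging
    (a simple stand-in for low-pass filtering + decimation)."""
    h, w = img.shape[:2]
    img = img[: h - h % r, : w - w % r]
    return img.reshape(h // r, r, w // r, r, *img.shape[2:]).mean(axis=(1, 3))

def walds_training_pair(pan, ms, ratio=4):
    """Build one reduced-resolution training sample under Wald's protocol:
    the inputs are the degraded PAN/MS, and the original MS is the label."""
    pan_lr = block_mean(pan, ratio)   # degraded PAN (now at the original MS scale)
    ms_lr = block_mean(ms, ratio)     # degraded MS
    return (pan_lr, ms_lr), ms        # original MS serves as the reference label

# Toy example: a 256x256 PAN image and a 64x64 four-band MS image, ratio 4.
pan = np.random.rand(256, 256)
ms = np.random.rand(64, 64, 4)
(pan_lr, ms_lr), label = walds_training_pair(pan, ms, ratio=4)
```

A network trained on such pairs is then applied at full resolution (original PAN and MS as inputs); the scale-invariance assumption questioned in the abstract is precisely that this transfer across scales holds.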
