Abstract
Most modern satellites can provide two types of images: 1) a panchromatic (PAN) image and 2) a multispectral (MS) image. The former has high spatial resolution but low spectral resolution, while the latter has high spectral resolution but low spatial resolution. To obtain images with both high spectral and high spatial resolution, pansharpening has emerged to fuse the spatial information of the PAN image with the spectral information of the MS image. However, most pansharpening methods fail to preserve spatial and spectral information simultaneously. In this article, we propose a framelet-based convolutional neural network (CNN) for pansharpening, which makes it possible to pursue both high spectral and high spatial resolution. Our network consists of three subnetworks: 1) a feature embedding net; 2) a feature fusion net; and 3) a framelet prediction net. Unlike conventional CNN methods that directly infer high-resolution MS images, our approach learns to predict their framelet coefficients from the available PAN and MS images. The introduction of multilevel feature aggregation and a hybrid residual connection makes full use of the spatial information of the PAN image and the spectral information of the MS image. Quantitative and qualitative experiments at reduced and full resolution demonstrate that the proposed method achieves more appealing results than other state-of-the-art pansharpening methods. The source code and trained models are available at https://github.com/TingMAC/FrMLNet.
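To make the three-subnetwork pipeline concrete, the following is a minimal PyTorch-style sketch of the layout described above (feature embedding, feature fusion, framelet prediction). All module names, layer widths, and the number of framelet subbands are illustrative assumptions, not the authors' implementation; the official code is at the repository linked above.

# Hypothetical sketch of the three-subnetwork layout described in the abstract.
# Layer choices, widths, and the number of framelet subbands are assumptions
# for illustration only.
import torch
import torch.nn as nn

class FeatureEmbeddingNet(nn.Module):
    """Extracts features from the concatenated PAN and upsampled MS input."""
    def __init__(self, in_ch, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.body(x)

class FeatureFusionNet(nn.Module):
    """Fuses the spatial (PAN) and spectral (MS) features."""
    def __init__(self, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, feats):
        return self.body(feats) + feats  # residual connection around the fusion block

class FrameletPredictionNet(nn.Module):
    """Predicts framelet coefficients of the high-resolution MS image."""
    def __init__(self, width, ms_bands, n_subbands):
        super().__init__()
        self.head = nn.Conv2d(width, ms_bands * n_subbands, 3, padding=1)
    def forward(self, feats):
        return self.head(feats)

class FrMLNetSketch(nn.Module):
    def __init__(self, ms_bands=4, n_subbands=4, width=64):
        super().__init__()
        self.embed = FeatureEmbeddingNet(ms_bands + 1, width)
        self.fuse = FeatureFusionNet(width)
        self.predict = FrameletPredictionNet(width, ms_bands, n_subbands)
    def forward(self, pan, ms_up):
        # pan: (B, 1, H, W); ms_up: (B, ms_bands, H, W) MS image upsampled to PAN size
        x = torch.cat([pan, ms_up], dim=1)
        coeffs = self.predict(self.fuse(self.embed(x)))
        # The high-resolution MS image would then be reconstructed by an inverse
        # framelet transform applied to these coefficients (not shown here).
        return coeffs

if __name__ == "__main__":
    net = FrMLNetSketch()
    pan = torch.randn(1, 1, 256, 256)
    ms_up = torch.randn(1, 4, 256, 256)
    print(net(pan, ms_up).shape)  # torch.Size([1, 16, 256, 256])

The key design point this sketch illustrates is that the final head outputs framelet coefficients (ms_bands x n_subbands channels) rather than the fused image itself, so the network learns the multiband detail representation and the image is recovered by the inverse framelet transform.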