Abstract

Pansharpening is a typical image fusion problem that aims to produce a high resolution multispectral (HRMS) image by integrating a high spatial resolution panchromatic (PAN) image with a low spatial resolution multispectral (MS) image. Prior arts have used either component substitution (CS)-based methods or multiresolution analysis (MRA)-based methods for this purpose. Although these methods are simple and easy to implement, they usually suffer from spatial or spectral distortions and cannot fully exploit the spatial and/or spectral information present in the PAN and MS images. Considering their complementary behavior, and with the goal of combining their advantages, we propose a pansharpening weight network (PWNet) that adaptively averages the fusion results obtained by different methods. The proposed PWNet works by learning adaptive weight maps for different CS-based and MRA-based methods through an end-to-end trainable neural network (NN). As a result, the proposed PWNet inherits the data adaptability and flexibility of NNs while maintaining the advantages of the traditional methods. Extensive experiments on data sets acquired by three different kinds of satellites demonstrate the effectiveness of the proposed PWNet and its competitiveness with state-of-the-art methods.

Highlights

  • Due to technical limitations [1], current satellites, such as QuickBird, IKONOS, WorldView-2, and GeoEye-1, cannot acquire high spatial resolution multispectral (MS) images directly; instead, they acquire an image pair with complementary features, i.e., a high spatial resolution panchromatic (PAN) image and a low spatial resolution MS image with rich spectral information.

  • Representative methods belonging to the multiresolution analysis (MRA)-based class are high-pass filtering (HPF) [18], smoothing filter-based intensity modulation (SFIM) [19], and the generalized Laplacian pyramid (GLP) [20,21], among many others [22,23]; a minimal sketch of this detail-injection idea is given after this list.

  • It can be found that the deep residual pan-sharpening neural network (DRPNN) is the most time-consuming method, because it has more hidden layers than the other convolutional neural network (CNN)-based methods.

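For concreteness, the snippet below sketches the general MRA detail-injection scheme underlying HPF- and SFIM-style fusion: a low-pass version of the PAN image supplies the high-frequency detail, which is injected into each upsampled MS band with either a unit gain (additive, HPF-like) or a ratio gain (multiplicative, SFIM-like). This is a minimal illustration, not the exact filters or gains of [18]–[21]; the Gaussian low-pass filter and the function and parameter names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mra_fuse(pan, ms_up, mode="hpf", sigma=2.0, eps=1e-6):
    """Generic MRA-style detail injection (illustrative sketch only).

    pan   : (H, W) panchromatic image.
    ms_up : (H, W, B) MS image already upsampled to the PAN grid.
    mode  : 'hpf'  -> additive injection with unit gain (HPF-like),
            'sfim' -> multiplicative ratio gain (SFIM-like).
    """
    pan_low = gaussian_filter(pan, sigma=sigma)       # low-pass approximation of PAN
    detail = pan - pan_low                            # high-frequency spatial detail
    if mode == "hpf":
        gain = np.ones_like(ms_up)                    # unit injection gain per band
    else:  # 'sfim'
        gain = ms_up / (pan_low[..., None] + eps)     # band-wise ratio gain
    return ms_up + gain * detail[..., None]           # inject detail into every band
```

Either variant produces one of the candidate fusion results that PWNet later re-weights.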

Summary

Introduction

Due to technical limitations [1], current satellites, such as QuickBird, IKONOS, WorldView-2, and GeoEye-1, cannot acquire high spatial resolution multispectral (MS) images directly; instead, they acquire an image pair with complementary features, i.e., a high spatial resolution panchromatic (PAN) image and a low spatial resolution MS image with rich spectral information. Component substitution (CS)-based and multiresolution analysis (MRA)-based methods usually behave in complementary ways when improving the spatial quality of MS images while preserving the corresponding spectral information. We propose a pansharpening weight network (PWNet) to bridge the classical methods (i.e., the CS-based and MRA-based methods) and the learning-based methods (typically CNN-based methods). PWNet uses the CS-based and MRA-based methods as inference modules and employs a CNN to learn adaptive weight maps for weighting the results of the classical methods. Extensive experiments on three kinds of data sets show that the fusion results obtained by PWNet achieve state-of-the-art performance compared with CS-based, MRA-based, and other CNN-based methods.
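To make the weighting idea concrete, the following PyTorch sketch shows one possible organization of such a weight network: a small CNN takes the PAN image, the upsampled MS image, and the candidate fusion results produced by the classical methods, outputs one weight map per method, and forms the HRMS estimate as the per-pixel weighted average of the candidates. The layer widths, the softmax normalization of the weights, and the exact network inputs are illustrative assumptions rather than the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """Illustrative weight-map CNN; layer sizes and inputs are assumptions."""

    def __init__(self, num_methods: int, ms_bands: int, feat: int = 32):
        super().__init__()
        # Inputs stacked along channels: PAN (1) + upsampled MS (ms_bands)
        # + num_methods candidate fusions (ms_bands channels each).
        in_ch = 1 + ms_bands + num_methods * ms_bands
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, num_methods, 3, padding=1),
        )

    def forward(self, pan, ms_up, candidates):
        # pan: (N, 1, H, W); ms_up: (N, B, H, W);
        # candidates: (N, M, B, H, W) fusion results from the CS/MRA methods.
        n, m, b, h, w = candidates.shape
        x = torch.cat([pan, ms_up, candidates.reshape(n, m * b, h, w)], dim=1)
        weights = torch.softmax(self.body(x), dim=1)             # (N, M, H, W) per-pixel weights
        fused = (weights.unsqueeze(2) * candidates).sum(dim=1)   # weighted average over methods
        return fused, weights
```

In this sketch, the weight maps are learned end to end by minimizing a reconstruction loss between the fused output and the reference HRMS image, so the network only decides how to blend the classical results rather than synthesizing the HRMS image from scratch.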

Related Work
The CS-Based Methods
The MRA-Based Methods
The Learning-Based Method
Motivation and Main Idea
Network Architecture
Data Sets and Implementation Details
Analysis of the Hyper-Parameter α
Impact of the Number of CS-Based and MRA-Based Methods
Impact of the Number of Weight Map Channels
Method
Comparison with the CNN-Based Methods
Comparison at Full Resolution
Running Time Analysis
Conclusions