Abstract

Pansharpening, a classic and active image fusion topic, aims to produce a high-resolution multispectral (HRMS) image with the spectral resolution of the multispectral (MS) image and the spatial resolution of the panchromatic (PAN) image. Prior works have introduced various pansharpening methods based on convolutional neural networks (CNNs) with different architectures. However, these methods do not consider the different scale information of the source images, which may lead to the loss of high-frequency details in the fused image. This paper proposes a pansharpening method for MS images via a multi-scale deep residual network (MSDRN). The proposed method constructs a multi-level network to make better use of the scale information of the source images. Moreover, residual learning is introduced into the network to further improve feature extraction and simplify the learning process. A series of experiments is conducted on the QuickBird and GeoEye-1 datasets. The results demonstrate that MSDRN achieves superior or competitive fusion performance compared with state-of-the-art methods in both visual and quantitative evaluation.

Highlights

  • The goal of pansharpening is to obtain a high-spatial-resolution MS (HRMS) image with the same spatial resolution as the PAN image, so the spatial resolution of the fused image should be as close as possible to that of the PAN image

  • In the ERGAS quality indicator, h/l is the ratio of the spatial resolutions of the PAN and MS images; N is the number of MS bands; RMSE(Bi) is the root mean square error between the i-th band of the fused image and the reference image; and μ(Bi) is the mean of the original MS image band Bi

  • Experimental results demonstrate that the progressive reconstruction scheme is beneficial to improve the quality of the fused image
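The terms listed in the second highlight above (h/l, N, RMSE(Bi), μ(Bi)) are those of the ERGAS index, ERGAS = 100 · (h/l) · sqrt((1/N) · Σᵢ (RMSE(Bi)/μ(Bi))²). A minimal sketch of this indicator, assuming reference-based evaluation (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def ergas(fused, reference, ratio):
    """ERGAS quality indicator: lower is better; 0 means a perfect match.

    fused, reference: arrays of shape (N, H, W), one slice per MS band.
    ratio: h/l, the PAN-to-MS spatial resolution ratio (e.g. 1/4).
    """
    n_bands = reference.shape[0]
    acc = 0.0
    for i in range(n_bands):
        # Per-band RMSE between fused and reference, normalized by the band mean.
        rmse = np.sqrt(np.mean((fused[i] - reference[i]) ** 2))
        mu = np.mean(reference[i])
        acc += (rmse / mu) ** 2
    return 100.0 * ratio * np.sqrt(acc / n_bands)

# Identical images give ERGAS = 0.
ref = np.random.rand(4, 32, 32) + 0.5
print(ergas(ref, ref, 1 / 4))  # 0.0
```

Note that μ(Bi) is taken here as the mean of the reference band, matching the highlight's description of it as the mean of the original MS band.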



Introduction

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

In existing CNN-based methods, the features of the MS and PAN images are first extracted separately, and the obtained features are then merged to reconstruct the pansharpened image. The source images are usually fed directly into the trained network to obtain the output, which may not make full use of the detailed information in the source images and can result in the loss of high-frequency details in the fused images.
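The extract-then-merge pipeline described above can be sketched in a few lines. This is a toy illustration with random 1x1-convolution weights standing in for learned feature extractors; all names and shapes are assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs: a 4-band MS patch (upsampled to PAN size) and a 1-band PAN patch.
ms = rng.random((4, 16, 16))
pan = rng.random((1, 16, 16))

def extract_features(img, out_channels, rng):
    """Stand-in feature extractor: a random 1x1 convolution (channel mixing)."""
    w = rng.standard_normal((out_channels, img.shape[0]))
    return np.einsum('oc,chw->ohw', w, img)

# Branch-wise feature extraction, then channel-wise merging.
f_ms = extract_features(ms, 8, rng)
f_pan = extract_features(pan, 8, rng)
merged = np.concatenate([f_ms, f_pan], axis=0)   # 16 feature maps

# Reconstruction back to 4 MS bands via another 1x1 convolution.
w_rec = rng.standard_normal((4, merged.shape[0]))
hrms = np.einsum('oc,chw->ohw', w_rec, merged)
print(hrms.shape)  # (4, 16, 16)
```

In a trained network the random weight matrices would be learned convolution kernels with spatial extent, but the data flow (two branches, concatenation, reconstruction) is the same.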

Deep Learning and Convolutional Neural Network
Residual Learning
CNN-Based Methods for Pansharpening
Motivation
The Architecture of Proposed Network
Training of Network
Datasets and Settings
Quality Indicators
Comparison Algorithms
The Influences of Scale Levels and Kernel Sizes
Quality indicator curves of the fused image with different network scale levels and kernel sizes
Experiments
Real Data Experiments
An example of real data experiments on the GeoEye-1 dataset
Conclusions
