Abstract

Pansharpening is a domain-specific task in satellite imagery processing that fuses a multispectral image with a corresponding panchromatic one to enhance the spatial resolution of the multispectral image. Most existing traditional methods fuse multispectral and panchromatic images in a linear manner, which greatly restricts the fusion accuracy. In this paper, we propose a highly efficient inference network for pansharpening that breaks the linear limitation of traditional methods. In the network, we adopt a dilated multilevel block coupled with a skip connection to perform local and overall compensation. With the dilated multilevel block, the proposed model can make full use of the extracted features and enlarge the receptive field without introducing extra computational burden. Experimental results reveal that our network yields competitive or even superior pansharpening performance compared with deeper models. As our network is shallow and trained with several techniques to prevent overfitting, our model is robust to the inconsistencies across different satellites.
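The claim that dilation enlarges the receptive field at no extra cost can be sketched numerically. The paper's actual dilated multilevel block is not specified here, so the 1-D convolution, kernel size, and dilation rates below are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Naive 1-D dilated convolution ('valid' mode): the taps are spaced
    `dilation` samples apart, so one output covers a wider input span while
    the number of weights (and multiply-adds per output) stays fixed."""
    k = len(kernel)
    span = dilation * (k - 1) + 1          # input span seen by one output
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of stride-1 dilated conv layers:
    rf = 1 + sum_i d_i * (k - 1)."""
    rf = 1
    for d in dilations:
        rf += d * (kernel_size - 1)
    return rf
```

For three 3-tap layers, plain convolution (dilations 1, 1, 1) gives a receptive field of 7, while dilations 1, 2, 4 give 15 with the same parameter count per layer, which is the effect the abstract relies on.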

Highlights

  • Motivated by the development of remote sensing technology, multiresolution imaging has been widely applied in civil and military fields

  • Pansharpening by convolutional neural networks (PNN) [13] and Remote Sensing Image Fusion with Convolutional Neural Network (SRCNN + Gram–Schmidt (GS)) [16] are pioneering convolutional neural network (CNN)-based methods for pansharpening; their prototype derives from SRCNN [17], a noted single-image super-resolution (SISR) method

  • The learning phase of the CNN model was carried out on a graphics processing unit (GPU) (NVIDIA GTX 1080 Ti with CUDA 8.0) using the deep learning framework Caffe [31], and testing was performed in MATLAB R2016b configured with GPU support


Summary

Introduction

Motivated by the development of remote sensing technology, multiresolution imaging has been widely applied in civil and military fields. MBO [9,10,11] is an alternative pansharpening approach to the aforementioned classes, in which an objective function is built from the degradation process relating the fused image to the MS and PAN inputs. In this case, the fused image is obtained by optimizing the loss function iteratively, which can be time-consuming. Compared with the previously discussed algorithms, CNN-based methods significantly improve pansharpening performance. However, those models are trained on specific datasets with deep network architectures, and they tend to be less robust when generalized to different datasets. As our network is shallow and trained with several domain-specific techniques to prevent overfitting, it exhibits more robust fusion ability when generalized to new satellites. This is not a common feature of other deep CNN approaches, since most of them are trained on specific datasets with deep networks, which leads to severe overfitting.
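The model-based (MBO) route summarized above can be sketched as iterative minimization of a fit-to-degradation objective. The degradation operator (2x2 average pooling), band weights, and gradient step below are illustrative assumptions for a toy problem, not the formulation of [9,10,11]:

```python
import numpy as np

def downsample(F):
    """Spatial degradation D: 2x2 average pooling applied per band."""
    B, H, W = F.shape
    return F.reshape(B, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

def adjoint_down(R):
    """Adjoint of D: spread each low-res residual over its 2x2 block."""
    return np.repeat(np.repeat(R, 2, axis=1), 2, axis=2) / 4.0

def fuse(MS, PAN, weights, lam=1.0, lr=0.5, steps=500):
    """Gradient descent on the objective
        E(F) = 0.5 * ||D(F) - MS||^2 + 0.5 * lam * ||w . F - PAN||^2,
    where w . F is a band-weighted sum approximating the PAN image.
    Each iteration touches every pixel, which is why such iterative
    schemes can be time-consuming compared with a single network pass."""
    F = np.repeat(np.repeat(MS, 2, axis=1), 2, axis=2)  # init: upsampled MS
    for _ in range(steps):
        grad_ms = adjoint_down(downsample(F) - MS)       # spectral-fit term
        r_pan = np.tensordot(weights, F, axes=1) - PAN   # spatial-fit residual
        grad_pan = weights[:, None, None] * r_pan[None]
        F = F - lr * (grad_ms + lam * grad_pan)
    return F
```

After convergence the fused image reproduces the observed MS image under downsampling and the PAN image under the band-weighted sum, which is exactly the consistency that the objective encodes.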

Linear Models in Pansharpening
Convolutional Neural Networks in Pansharpening
Dilated Convolution
Experiment
Datasets
Loss Function
Training Details
Experimental Results
Original
Generalization of previous reduced-scale experiments
Results
Conclusions