Abstract

Convolutional neural networks (CNNs) have achieved remarkable results in the fusion of multispectral (MS) and panchromatic (PAN) images (pansharpening) because of their strong image feature extraction ability. However, most previous CNN-based pansharpening methods use ordinary convolutions, whose small receptive fields capture insufficient contextual information and extract only shallow features, which hinders learning the complex nonlinear mapping between the input images and the fused image. Therefore, this study proposes a pansharpening algorithm based on a multiscale densely connected convolutional neural network (MDCNN). First, a two-stream network performs feature extraction: two convolution layers extract spectral information from the MS image, and a multiscale convolutional feature extraction module extracts spatial detail features from the PAN image. Second, the proposed multiscale densely connected modules and residual modules form the backbone of the fusion network. Finally, the generated deep features are reconstructed, and spectral mapping retains spectral information to obtain a high-resolution fused image. Experiments on three satellite image datasets show that the proposed algorithm generates high-quality fused images and outperforms most state-of-the-art pansharpening methods in both subjective visual comparisons and objective evaluation indexes.
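As a concrete illustration of the pipeline the abstract describes, below is a minimal sketch of a two-stream fusion network in PyTorch. All module names, channel counts, kernel sizes, and the number of dense blocks are illustrative assumptions made here for clarity; the abstract does not specify the paper's exact MDCNN configuration.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel convolutions at several kernel sizes, concatenated.
    An assumed realization of the paper's multiscale convolution module."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 3
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.b7 = nn.Conv2d(in_ch, out_ch - 2 * branch_ch, kernel_size=7, padding=3)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Larger kernels enlarge the receptive field and add context
        return self.act(torch.cat([self.b3(x), self.b5(x), self.b7(x)], dim=1))

class MDCNNSketch(nn.Module):
    """Hypothetical two-stream fusion network: an MS spectral branch and a
    PAN multiscale branch, followed by densely connected multiscale blocks
    and a residual spectral mapping."""
    def __init__(self, ms_bands=4, feat=32, n_dense=3):
        super().__init__()
        # MS branch: two plain convolution layers for spectral features
        self.ms_branch = nn.Sequential(
            nn.Conv2d(ms_bands, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # PAN branch: multiscale convolutional feature extraction
        self.pan_branch = MultiScaleBlock(1, feat)
        # Fusion backbone: each block sees all previous feature maps
        self.dense_blocks = nn.ModuleList(
            [MultiScaleBlock(2 * feat + i * feat, feat) for i in range(n_dense)]
        )
        # Reconstruction back to the MS band count
        self.reconstruct = nn.Conv2d(2 * feat + n_dense * feat, ms_bands, 3, padding=1)

    def forward(self, ms_up, pan):
        # ms_up: MS image upsampled to PAN resolution; pan: single-band PAN image
        feats = torch.cat([self.ms_branch(ms_up), self.pan_branch(pan)], dim=1)
        for block in self.dense_blocks:
            feats = torch.cat([feats, block(feats)], dim=1)  # dense connectivity
        # Residual spectral mapping: add the upsampled MS back to retain spectra
        return self.reconstruct(feats) + ms_up

if __name__ == "__main__":
    # Toy example: fuse a 4-band MS image (already upsampled) with a PAN image
    ms_up = torch.randn(1, 4, 256, 256)
    pan = torch.randn(1, 1, 256, 256)
    fused = MDCNNSketch(ms_bands=4)(ms_up, pan)
    print(fused.shape)  # torch.Size([1, 4, 256, 256])
```

Concatenating each block's output with its input mimics dense connectivity, and adding the upsampled MS image back at the output is one common way to realize the residual spectral mapping the abstract mentions; the authors' actual modules may differ.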
