Abstract

During the past two decades, many remote sensing image fusion techniques have been designed to improve the spatial resolution of the low-spatial-resolution multispectral bands. The main objective is to fuse the low-resolution multispectral (MS) image and the high-spatial-resolution panchromatic (PAN) image to obtain a fused image with both high spatial resolution and rich spectral information. Recently, many artificial intelligence-based deep learning models have been designed to fuse remote sensing images. However, these models do not consider the inherent image distribution difference between MS and PAN images, so the fused images they produce may suffer from gradient and color distortion. To overcome these problems, this paper proposes an efficient artificial intelligence-based deep transfer learning model. The Inception-ResNet-v2 model is improved by using a color-aware perceptual loss (CPL), and the resulting fused images are further refined with a gradient channel prior as a postprocessing step, which preserves color and gradient information. Extensive experiments are carried out on benchmark datasets. Performance analysis shows that the proposed model preserves color and gradient information in the fused remote sensing images more effectively than existing models.
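The abstract does not reproduce the exact CPL formulation, so the following is only a minimal sketch of how a color-aware perceptual loss could be built on pretrained Inception-ResNet-v2 features. The choice of feature layer, the channel-mean color term, and the weights `alpha` and `beta` are illustrative assumptions, not the authors' definition.

```python
# Hedged sketch of a color-aware perceptual loss (CPL).
# Assumptions: timm's pretrained inception_resnet_v2 as a frozen feature
# extractor, its last feature map for the perceptual term, and per-channel
# means as a crude color-consistency term. alpha/beta are illustrative.
import torch
import torch.nn.functional as F
import timm


class ColorAwarePerceptualLoss(torch.nn.Module):
    def __init__(self, alpha: float = 1.0, beta: float = 0.5):
        super().__init__()
        self.backbone = timm.create_model(
            "inception_resnet_v2", pretrained=True, features_only=True
        )
        self.backbone.eval()
        for p in self.backbone.parameters():
            p.requires_grad = False  # used only as a fixed feature extractor
        self.alpha, self.beta = alpha, beta

    def forward(self, fused: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        # Perceptual term: distance between deep feature maps.
        perceptual = F.mse_loss(
            self.backbone(fused)[-1], self.backbone(reference)[-1]
        )
        # Color term: penalize global per-channel color shifts.
        color = F.l1_loss(fused.mean(dim=(2, 3)), reference.mean(dim=(2, 3)))
        return self.alpha * perceptual + self.beta * color
```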

Highlights

  • Fusion of multispectral (MS) and panchromatic (PAN) images has attracted researchers’ interest, since it results in a fused image with better spatial resolution and spectral information [1]. The spatial resolution of a PAN image is significantly better than that of an MS image

  • It has been found that the existing models do not consider the inherent image distribution difference between MS and PAN images. Therefore, the obtained fused images suffer from gradient and color distortion problems

  • The Inception-ResNet-v2 model was improved by using a color-aware perceptual loss (CPL). The obtained fused images were further improved by using gradient channel prior as a postprocessing step (a rough sketch of this prior follows this list)
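As a rough illustration of the postprocessing idea, the sketch below computes a gradient channel prior: the per-pixel minimum of channel-wise gradient magnitudes over a local patch. The Sobel-based gradients and the 15×15 patch size are assumptions; the paper's full restoration procedure built on this prior is not reproduced here.

```python
# Hedged sketch of the gradient channel prior (GCP) as a postprocessing cue.
# Sobel gradients and the 15x15 patch minimum are assumptions, not the
# paper's exact procedure.
import numpy as np
import cv2


def gradient_channel_prior(image: np.ndarray, patch: int = 15) -> np.ndarray:
    """Per-pixel minimum of channel-wise gradient magnitudes over a local patch."""
    grads = []
    for c in range(image.shape[2]):
        gx = cv2.Sobel(image[:, :, c], cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(image[:, :, c], cv2.CV_64F, 0, 1, ksize=3)
        grads.append(np.sqrt(gx ** 2 + gy ** 2))
    # Minimum gradient magnitude across channels...
    min_grad = np.min(np.stack(grads, axis=2), axis=2)
    # ...then over the local patch, realized as a morphological erosion.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_grad, kernel)
```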

Introduction

Fusion of multispectral (MS) and panchromatic (PAN) images has attracted researchers’ interest, since it results in a fused image with better spatial resolution and spectral information [1]. The spatial resolution of a PAN image is significantly better than that of an MS image. Early fusion models were quite simple and efficient and could produce high-spatial-quality images [5, 6]. Non-linear combinations of MS bands have also been used to improve performance [14]. However, most of these methods suffer from inadequate spatial texture improvement and spectral distortion. More recently, a four-layer CNN and an accompanying loss function were designed to extract spatial and spectral characteristics efficiently from the original images; this approach required neither a reference fused image nor simulated data for training [18]. Still, such models do not consider the inherent image distribution difference between MS and PAN images, and therefore the obtained fused images may suffer from gradient and color distortion problems. To overcome these problems, an efficient deep transfer learning model is proposed in this paper.
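For concreteness, a minimal sketch of such a four-layer fusion CNN follows. The channel widths, kernel sizes, and the concatenated MS-plus-PAN input are assumptions for illustration, not the architecture of [18].

```python
# Hedged sketch of a four-layer fusion CNN in the spirit of the model in [18].
# Channel counts and kernel sizes are assumptions.
import torch
import torch.nn as nn


class FourLayerFusionCNN(nn.Module):
    def __init__(self, ms_bands: int = 4):
        super().__init__()
        # Input: MS bands upsampled to PAN resolution, concatenated with PAN.
        self.net = nn.Sequential(
            nn.Conv2d(ms_bands + 1, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, ms_bands, kernel_size=3, padding=1),  # fused MS bands
        )

    def forward(self, ms_up: torch.Tensor, pan: torch.Tensor) -> torch.Tensor:
        # ms_up: (N, ms_bands, H, W) upsampled MS; pan: (N, 1, H, W) PAN band.
        return self.net(torch.cat([ms_up, pan], dim=1))
```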
