Abstract
To overcome the absence of a multispectral (MS) image at prediction time while adequately preserving the spatial information of the panchromatic (PAN) image and the spectral information of the MS image, this study proposes a method that injects the spectral information of the prior MS image into the prior PAN image during training, so that only the posterior PAN image is needed for prediction. First, we introduce an autoencoder model based on image colorization and discuss its feasibility for multi-band remote sensing image pan-sharpening. Then, image quality evaluation functions covering both spatial and spectral indexes are combined into the loss function that controls the colorization model. Because the loss function contains spatial and spectral evaluation indexes, it can directly measure the loss between the network output and the label while accounting for the characteristics of remote sensing images. In addition, the training data in our model are original PAN images, so there is no need to produce simulated degraded MS and PAN data for training, which is a major difference from most existing deep learning pan-sharpening methods. Three aspects change the current learning framework and optimization rule of deep learning pan-sharpening: a loss function built from spectral and spatial quality indexes instead of the generic mean square error (MSE); only the original PAN image as input instead of simulated degraded MS + PAN pairs; and only the spectral features to be learned instead of the fusion result itself. Finally, thousands of remote sensing images from different scenes are used to build the training dataset and verify the effectiveness of the proposed method. We also select seven representative pan-sharpening algorithms and four widely recognized objective fusion metrics to evaluate and compare performance on the WorldView-2 experimental data.
The results show that the proposed method achieves the best performance in terms of both subjective visual quality and objective assessment.
Highlights
Pan-sharpening refers to the fusion of multispectral (MS) images with panchromatic (PAN) images to produce images with both high spectral and high spatial resolution
In this paper, a novel pan-sharpening structure, a variation of the standard gray-image colorization model, is proposed
Compared with traditional gray-image colorization, the key difference is that spectral and spatial quality evaluation functions are simultaneously introduced as the loss function, which changes the learning target and the learning framework built on the original data
Summary
Pan-sharpening refers to the fusion of multispectral (MS) images with panchromatic (PAN) images to produce images with both high spectral and high spatial resolution. Existing training-based methods can produce fused results with small spectral distortion, but several problems remain unaddressed. First, they compute the loss against the network output of simulated training images rather than the original training images, which may ignore some characteristics of the original data. Our method instead adopts a spatial quality evaluation function and a spectral quality evaluation function together as the loss function, controlling the spatial and spectral quality of the network output at the same time. In this way, the proposed PAN image colorization method uses the original PAN image as the network input, and the concatenation of the up-sampled MS and the original PAN image as the label. In the field of image colorization, the generation model usually adopts an autoencoder network, in which the encoder extracts progressively higher-level features from the input through successive down-sampling layers. Since the best values of Q and SAM lie at the maximum and the minimum of their respective value ranges, we multiply Q by minus one so that both terms of the loss are optimized in the same direction.
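The combined loss described above can be sketched as follows. This is a minimal NumPy illustration, assuming the spectral term is the spectral angle mapper (SAM) and the other term is the Wang-Bovik universal quality index Q, with Q multiplied by minus one as described; the function names and the unweighted sum are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def sam(ref, fused, eps=1e-12):
    # Spectral Angle Mapper: mean angle (radians) between the spectral
    # vectors of corresponding pixels. Inputs have shape (H, W, C).
    dot = np.sum(ref * fused, axis=-1)
    norm = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fused, axis=-1)
    cos = np.clip(dot / (norm + eps), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))

def q_index(x, y, eps=1e-12):
    # Wang-Bovik universal image quality index, computed globally over
    # the flattened images; best value is 1.
    x, y = x.ravel(), y.ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2) + eps))

def fusion_loss(ref, fused):
    # SAM is minimized and Q is maximized, so Q enters with a factor
    # of minus one; lower loss means better spectral and spatial quality.
    return sam(ref, fused) - q_index(ref, fused)
```

For a perfect reconstruction, SAM is 0 and Q is 1, so the loss approaches its minimum of -1; any spectral or structural distortion increases it.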