Abstract

In this paper we propose a new multispectral image fusion (pansharpening) architecture. The proposed method consists of two steps, each based on a neural network. First, spatial information extracted from the panchromatic (Pan) image is injected into the upsampled multispectral (MS) image. In this step, a deep convolutional neural network (DCNN) estimates the spatial information of the MS image according to the multi-resolution analysis (MRA) scheme. This DCNN is trained with the low-spatial-resolution version of the Pan image as input and the spatial information as target, and is called the 'fusion network (FN)'. The FN adaptively estimates the spatial information of the MS images and operates as an injection gain in the MRA scheme. In the second step, spectral compensation is performed on the fused MS image. For this purpose, we use a novel loss function for this DCNN that reduces spectral distortion in the fused images while simultaneously preserving spatial information. This network is called the 'spectral compensation network (SCN)'. Finally, the proposed method is compared with several state-of-the-art methods on three datasets, using both full-reference and reduced-reference criteria. The experimental results show that the proposed method achieves competitive performance in both spatial and spectral quality.
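The MRA injection step described above can be sketched minimally as follows. This is an illustrative NumPy sketch, not the paper's implementation: the Gaussian low-pass filter stands in for the MRA decomposition, and a constant injection gain stands in for the gain that the fusion network (FN) would predict per pixel and band.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0, radius=4):
    """Separable Gaussian low-pass filter (stand-in for the MRA decomposition)."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode='edge')
    # Convolve along rows, then columns; 'valid' restores the original size.
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, rows)

def mra_inject(ms_up, pan, gain):
    """MRA-style detail injection.

    ms_up: (H, W, B) multispectral image upsampled to the Pan resolution.
    pan:   (H, W) panchromatic image.
    gain:  injection gain; in the proposed method this would be estimated
           adaptively by the FN, here it is a scalar for illustration.
    """
    detail = pan - gaussian_blur(pan)          # high-frequency spatial detail
    return ms_up + gain * detail[..., None]    # inject into every MS band
```

With `gain = 0` the output reduces to the upsampled MS image, which makes the role of the injection gain easy to verify; the FN in the proposed method replaces this scalar with a learned, spatially adaptive estimate.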

