Abstract
In this paper, we propose a new multispectral image fusion architecture. The proposed method consists of two steps, each based on a neural network. First, the spatial information extracted from the panchromatic (Pan) image is injected into the upsampled multispectral (MS) image. In this step, the method employs a deep convolutional neural network (DCNN) to estimate the spatial information of the MS image according to the multi-resolution analysis (MRA) scheme. This DCNN is trained with the low-spatial-resolution version of the Pan image as input and the spatial information as the target. The trained DCNN is called the 'Fusion network (FN)'. The FN adaptively estimates the spatial information of the MS images and operates as an injection gain in the MRA scheme. In the second step, spectral compensation is performed on the fused MS image. For this purpose, we use a novel loss function for this DCNN to reduce the spectral distortion in the fused images while simultaneously maintaining the spatial information. This network is called the 'Spectral compensation network (SCN)'. Finally, the proposed method is compared with several state-of-the-art methods on three datasets, using both full-reference and reduced-reference criteria. The experimental results show that the proposed method achieves competitive performance in terms of both spatial and spectral information.
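To make the MRA-style injection described above concrete, the following is a minimal sketch in PyTorch. It assumes a small three-layer CNN standing in for the Fusion network (FN), an average-pooling low-pass filter for extracting the Pan detail, and arbitrary tensor sizes; none of these choices are taken from the paper, which does not specify its exact layer configuration here.

```python
# Minimal sketch of MRA detail injection with a CNN-predicted injection gain.
# The FusionNet architecture and the low-pass filter are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionNet(nn.Module):
    """Hypothetical stand-in for the 'Fusion network (FN)': predicts per-pixel
    injection gains, one per MS band, from the Pan image."""

    def __init__(self, n_bands: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, n_bands, 3, padding=1),
        )

    def forward(self, pan):
        return self.body(pan)


def mra_fuse(ms_up, pan, fn):
    """MRA injection: fused = upsampled MS + gain * (Pan - lowpass(Pan))."""
    pan_low = F.avg_pool2d(pan, kernel_size=5, stride=1, padding=2)  # crude low-pass
    detail = pan - pan_low            # spatial detail of the Pan image
    gain = fn(pan)                    # adaptive, CNN-estimated injection gains
    return ms_up + gain * detail      # detail broadcast over all MS bands


if __name__ == "__main__":
    fn = FusionNet(n_bands=4)
    ms_up = torch.rand(1, 4, 256, 256)   # upsampled MS image (4 bands)
    pan = torch.rand(1, 1, 256, 256)     # panchromatic image
    fused = mra_fuse(ms_up, pan, fn)
    print(fused.shape)                   # torch.Size([1, 4, 256, 256])
```

In this sketch the CNN output plays the role of the injection gain of the classical MRA formulation; the subsequent spectral compensation network (SCN) and its loss are not reproduced here, since the abstract does not specify their form.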