Abstract
Fusion results of low-resolution multispectral (LRMS) and high-resolution panchromatic (Pan) images, also called pan-sharpened images, are difficult to evaluate because high-resolution multispectral (HRMS) reference images are unavailable and the fusion process is complex. Taking the spectral information of the LRMS image and the spatial structural information of the Pan image as references, we extract the saturation map and luminance value as spectral features, and construct the optimal contrast map and the structure similarity map as spatial features. From these features we compute four indices between the original LRMS and Pan images and the fused result: saturation similarity, luminance consistency, contrast similarity, and structure similarity, each describing distortion from a different aspect. The four indices are then fed into an extreme learning machine to train a nonlinear pooling strategy, yielding a multifeature, learning-based model for fusion image quality assessment. Comparisons with state-of-the-art image quality assessment metrics show that the proposed metric achieves much higher consistency with subjective opinions while requiring no HRMS reference images.
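The pipeline described above (feature-based indices pooled by an extreme learning machine) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the saturation, luminance, contrast, and structure measures here are simplified stand-ins (e.g., saturation approximated as the per-pixel band range, similarity expressed in the usual SSIM-style ratio form), and all function names are hypothetical.

```python
import numpy as np

EPS = 1e-6  # small constant to stabilize the similarity ratios

def spectral_indices(lrms, fused):
    """Saturation similarity and luminance consistency vs. the LRMS reference.
    Images are H x W x B arrays; saturation is approximated as band range."""
    sat_a = lrms.max(axis=2) - lrms.min(axis=2)
    sat_b = fused.max(axis=2) - fused.min(axis=2)
    sat_sim = np.mean((2 * sat_a * sat_b + EPS) /
                      (sat_a**2 + sat_b**2 + EPS))
    lum_a, lum_b = lrms.mean(), fused.mean()
    lum_con = (2 * lum_a * lum_b + EPS) / (lum_a**2 + lum_b**2 + EPS)
    return sat_sim, lum_con

def spatial_indices(pan, fused):
    """Contrast and structure similarity vs. the Pan reference,
    using the band-averaged intensity of the fused image."""
    inten = fused.mean(axis=2)
    sd_a, sd_b = pan.std(), inten.std()
    con_sim = (2 * sd_a * sd_b + EPS) / (sd_a**2 + sd_b**2 + EPS)
    a, b = pan - pan.mean(), inten - inten.mean()
    struct_sim = (np.sum(a * b) + EPS) / (
        np.sqrt(np.sum(a**2) * np.sum(b**2)) + EPS)
    return con_sim, struct_sim

def elm_train(X, y, hidden=20, seed=0):
    """Extreme learning machine: random hidden layer, closed-form output
    weights via the pseudoinverse of the hidden activations."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Pool the four indices into a single quality score."""
    return np.tanh(X @ W + b) @ beta
```

In use, each fused image yields a 4-vector of indices; the ELM is trained on such vectors against subjective quality scores, and prediction is a single forward pass, which is what makes the pooling both nonlinear and cheap to fit.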