Abstract

Fusion results of low-resolution multispectral (LRMS) images and high-resolution panchromatic (Pan) images, also called pan-sharpened images, are difficult to evaluate because no high-resolution multispectral (HRMS) reference exists and the fusion process is complex. Taking the spectral information of the LRMS image and the spatial structural information of the Pan image as references, we extract a saturation map and a luminance value as spectral features, and construct an optimal contrast map and a structural similarity map as spatial features. From these we compute four indices between the original LRMS and Pan images and the fused result: saturation similarity, luminance consistency, contrast similarity, and structure similarity, which describe distortions from different aspects. The four indices are then fed into an extreme learning machine to train a nonlinear pooling strategy, yielding a multifeature, learning-based model for fusion image quality assessment. Comparisons with state-of-the-art image quality assessment metrics show that the proposed metric achieves much higher consistency with subjective opinions while requiring no reference HRMS image.
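The nonlinear pooling step can be illustrated with a minimal extreme learning machine (ELM) sketch: random fixed hidden-layer weights, with output weights solved in closed form by least squares. This is a generic ELM on synthetic data, not the authors' implementation; the feature names, network size, and the synthetic target standing in for subjective opinion scores are all assumptions for illustration.

```python
import numpy as np

# Hedged sketch: an ELM that pools four quality indices
# (saturation similarity, luminance consistency, contrast
# similarity, structure similarity) into one quality score.
# The training data below is synthetic; in practice the targets
# would be subjective opinion scores of the fused images.
rng = np.random.default_rng(0)

N = 200                                  # number of training images (assumed)
X = rng.uniform(0.0, 1.0, size=(N, 4))   # four indices in [0, 1]
# Hypothetical nonlinear "subjective score" target (unknown in practice).
y = np.tanh(X @ np.array([0.4, 0.2, 0.25, 0.15])) + 0.01 * rng.standard_normal(N)

H_UNITS = 50                             # hidden-layer size (assumed)
W = rng.standard_normal((4, H_UNITS))    # fixed random input weights
b = rng.standard_normal(H_UNITS)         # fixed random biases

def hidden(X):
    """Hidden-layer activations (sigmoid) for an (n, 4) index array."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# ELM training: output weights via Moore-Penrose pseudoinverse.
beta = np.linalg.pinv(hidden(X)) @ y

def predict(indices):
    """Pool an (n, 4) array of index values into quality scores."""
    return hidden(np.atleast_2d(indices)) @ beta

pred = predict(X)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Only the output weights `beta` are learned; the random hidden layer is left fixed, which is what makes ELM training a single linear least-squares solve rather than iterative backpropagation.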
