Abstract

Compared with ordinary single-exposure images, multi-exposure fusion (MEF) images are prone to color imbalance, loss of detail information, and abnormal exposure during the process of combining multiple images with different exposure levels. In this paper, we propose a human-visual-perception-based quality assessment method for MEF images that considers the related perceptual features (i.e., color, dense scale-invariant feature transform (DSIFT), and exposure) to accurately measure quality degradation, which is closely related to the symmetry principle of the human visual system. First, the L1 norm of the chrominance components between the fused image and a designed pseudo image with the most severe color attenuation is calculated to measure global color degradation, and color saturation similarity is added to eliminate the influence of color over-saturation. Second, a set of distorted images at different exposure levels carrying the strong edge information of the fused image is constructed through structural transfer, so that DSIFT similarity and DSIFT saturation can be computed to measure local detail loss and enhancement, respectively. Third, a Gaussian exposure function is used to detect over-exposed or under-exposed areas, and the above perceptual features are aggregated with a random forest to predict the final quality of the fused image. Experimental results on a public MEF subjective assessment database show the superiority of the proposed method over state-of-the-art image quality assessment models.
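The third step above relies on a Gaussian exposure function that scores how well exposed each pixel is and flags the rest as abnormal. A minimal sketch of that idea is below; the mean 0.5, the width sigma = 0.2, and the cutoff threshold are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def exposure_map(luma, mu=0.5, sigma=0.2):
    """Gaussian well-exposedness: luminance near mu scores close to 1,
    over- or under-exposed pixels score close to 0 (mu, sigma assumed)."""
    return np.exp(-((luma - mu) ** 2) / (2.0 * sigma ** 2))

def abnormal_exposure_mask(luma, threshold=0.1):
    """Flag pixels whose well-exposedness falls below an assumed threshold."""
    return exposure_map(luma) < threshold

# Toy luminance map in [0, 1]: corners are nearly black / nearly white.
luma = np.array([[0.02, 0.50, 0.98],
                 [0.45, 0.60, 0.10]])
mask = abnormal_exposure_mask(luma)
```

With these defaults, the 0.02 and 0.98 pixels are flagged as abnormally exposed while mid-range values pass.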

Highlights

  • Natural scenes usually span a wide brightness range, from 10⁻⁵ cd/m² to 10⁸ cd/m², but it is difficult for existing imaging devices to acquire all parts of the scene information in a single exposure due to the limitation of their own dynamic range [1]

  • To compare with existing image quality assessment models, the experiments were performed on the public subjective assessment database provided by Waterloo IVC

  • The chrominance information, which is usually ignored in the existing IQA models for image fusion, is utilized to form the local color saturation similarity and global color distortion metric
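The color features highlighted above combine a global chrominance distance with a local saturation similarity. The sketch below illustrates both under stated assumptions: the SSIM-style ratio form and the stabilizing constant `c` are conventional choices, not the paper's exact formulation, and `cb0`/`cr0` stand in for the chrominance of the pseudo image with maximal color decay.

```python
import numpy as np

def saturation_similarity(s_fused, s_ref, c=1e-4):
    """Pointwise similarity of two saturation maps using an SSIM-style
    ratio (the form and the constant c are assumptions)."""
    return (2 * s_fused * s_ref + c) / (s_fused ** 2 + s_ref ** 2 + c)

def global_color_distortion(cb, cr, cb0, cr0):
    """Global color degradation: mean L1 distance of the chrominance
    channels (Cb, Cr) between the fused image and a pseudo image
    with the most severe color attenuation."""
    return float(np.mean(np.abs(cb - cb0) + np.abs(cr - cr0)))
```

Identical saturation maps yield a similarity of 1, and identical chrominance yields zero distortion, so both metrics behave as fidelity measures.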


Summary

Introduction

Natural scenes usually span a wide brightness range, from 10⁻⁵ cd/m² to 10⁸ cd/m², but it is difficult for existing imaging devices to acquire all parts of the scene information in a single exposure due to the limitation of their own dynamic range [1]. Multi-exposure fusion (MEF), as an effective quality enhancement technology, is able to integrate multiple low dynamic range (LDR) images captured at different exposure levels into a single image. The performance differences between MEF algorithms are mainly reflected in how the fusion weights are solved. Mertens et al. [4] constructed the weights by considering contrast, saturation and good exposure, and fused multiple images with a multi-scale pyramid model. On this basis, Li et al. [5] made the fused image subjectively more realistic by solving a quadratic optimization problem to enhance the detail information.

Symmetry 2019, 11, 1494; doi:10.3390/sym11121494; www.mdpi.com/journal/symmetry
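The Mertens et al. weighting scheme described above can be sketched compactly: each pixel gets a weight from the product of a contrast term, a saturation term, and a well-exposedness term. This is a simplified single-scale illustration (the exponents, the Laplacian contrast measure via `np.roll`, and sigma = 0.2 are common defaults assumed here, and the multi-scale pyramid blending step is omitted).

```python
import numpy as np

def mertens_weights(img, wc=1.0, ws=1.0, we=1.0, sigma=0.2):
    """Per-pixel fusion weight = contrast^wc * saturation^ws * exposedness^we.
    img: H x W x 3 float array in [0, 1]. Parameters are assumed defaults."""
    gray = img.mean(axis=2)
    # Contrast: absolute response of a discrete Laplacian filter
    # (wrap-around boundaries via np.roll, for brevity).
    contrast = np.abs(
        -4 * gray
        + np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
        + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1)
    )
    # Saturation: standard deviation across the RGB channels.
    saturation = img.std(axis=2)
    # Well-exposedness: product of per-channel Gaussian scores around 0.5.
    exposedness = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=2)
    return (contrast ** wc) * (saturation ** ws) * (exposedness ** we)

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))          # toy RGB exposure, values in [0, 1]
weights = mertens_weights(img)       # one weight map per input exposure
```

In the full algorithm these maps are computed for every exposure, normalized across the stack, and blended with a multi-scale pyramid.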


