Abstract

The tone mapping operator (TMO) enables high dynamic range (HDR) images to be displayed on low dynamic range (LDR) consumer electronic devices. However, because of the reduced bit depth, the resulting images are not always of ideal quality. In contrast, multi-exposure image fusion (MEF) bypasses the intermediate HDR image composition and directly produces an image that can be displayed on standard devices. Inspired by this, this paper proposes a quality assessment method for tone-mapped images (TMIs) based on generating multi-exposure sequences. Specifically, the method uses a generative adversarial network (GAN) to generate a sequence of images at different exposure levels from each TMI. A two-branch convolutional neural network (CNN) then extracts features from the tone-mapped image and the multi-exposure reference sequence, respectively. Finally, a transformer is used to mine the intrinsic connections between TMIs and multi-exposure sequences and to learn the mapping from feature space to quality space. We conducted extensive experiments on the ESPL-LIVE HDR database. The applicability and effectiveness of the proposed method are verified by comparing relevant features and model configurations against existing mainstream evaluation algorithms.
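To make the described pipeline concrete, the sketch below shows one way the two-branch CNN plus transformer stage could be wired together in PyTorch. The layer sizes, the lightweight backbone, the token-level fusion, and the regression head are assumptions for illustration only, not the paper's reported configuration; the GAN that generates the multi-exposure sequence is treated as an external module and is not reproduced here.

```python
# Minimal sketch (assumed architecture, not the paper's exact configuration):
# a two-branch CNN extracts features from the tone-mapped image (TMI) and from
# the GAN-generated multi-exposure sequence, and a transformer encoder fuses
# the resulting tokens before regressing a single quality score.
import torch
import torch.nn as nn


class SmallCNN(nn.Module):
    """Lightweight convolutional feature extractor (hypothetical backbone)."""

    def __init__(self, out_dim: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(128, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x).flatten(1)   # (B, 128)
        return self.proj(f)               # (B, out_dim)


class TMIQualityModel(nn.Module):
    """Two-branch feature extraction + transformer fusion + quality regression."""

    def __init__(self, dim: int = 256, num_layers: int = 2, num_heads: int = 4):
        super().__init__()
        self.tmi_branch = SmallCNN(dim)   # branch for the tone-mapped image
        self.ref_branch = SmallCNN(dim)   # branch for the multi-exposure sequence
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(dim, 1)     # maps fused features to a quality score

    def forward(self, tmi: torch.Tensor, exposures: torch.Tensor) -> torch.Tensor:
        # tmi:       (B, 3, H, W) tone-mapped image
        # exposures: (B, N, 3, H, W) GAN-generated multi-exposure sequence
        b, n, c, h, w = exposures.shape
        tmi_tok = self.tmi_branch(tmi).unsqueeze(1)                 # (B, 1, dim)
        ref_tok = self.ref_branch(exposures.view(b * n, c, h, w))   # (B*N, dim)
        ref_tok = ref_tok.view(b, n, -1)                            # (B, N, dim)
        tokens = torch.cat([tmi_tok, ref_tok], dim=1)               # (B, 1+N, dim)
        fused = self.transformer(tokens)                            # (B, 1+N, dim)
        return self.head(fused.mean(dim=1)).squeeze(-1)             # (B,) quality score


# Example usage with random tensors standing in for a TMI and its exposure sequence.
model = TMIQualityModel()
tmi = torch.randn(2, 3, 224, 224)
exposures = torch.randn(2, 5, 3, 224, 224)   # e.g. 5 exposure levels per image
print(model(tmi, exposures).shape)           # torch.Size([2])
```

In this sketch the TMI and each generated exposure image become one token each, so the transformer can attend between the distorted image and its pseudo-reference sequence; the actual training objective (e.g. regression against subjective scores on ESPL-LIVE HDR) is left out.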
