Abstract

No-reference image quality assessment (NR-IQA) is a challenging task. A promising approach is to design NR-IQA algorithms that mimic how the human visual system (HVS) works. The internal generative mechanism (IGM) indicates that the HVS actively infers the primary content of an image for better understanding. Inspired by this, a novel NR-IQA method with active inference is proposed in this paper. First, a generative adversarial network (GAN) is proposed to predict the primary content of a distorted image, with two IGM-inspired constraints imposed during optimization. Next, based on the correlation between the distorted image and its primary content, different degradations (i.e., content-, distortion-, and structure-dependent degradations) are measured simultaneously with a multi-stream convolutional neural network (CNN) for NR-IQA. Benefiting from the primary content obtained from the GAN and the multi-degradation measurement of the CNN, our method achieves state-of-the-art performance on five public IQA databases.
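The abstract does not give architectural details, but the overall pipeline, a generator that infers the primary content of a distorted image followed by a multi-stream CNN that fuses several degradation cues into a quality score, can be sketched roughly as below. All module names, layer sizes, and the choice of streams in this PyTorch sketch are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the exact layer configuration, stream inputs, and
# training losses are not specified in the abstract, so all module names,
# channel sizes, and the use of a difference map as one stream are assumptions.
import torch
import torch.nn as nn

class PrimaryContentGenerator(nn.Module):
    """Toy stand-in for the GAN generator that predicts the primary content
    of a distorted image (trained adversarially in the full method)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, distorted):
        return self.net(distorted)

def make_stream():
    """One CNN stream; each stream processes a different view of the input."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class MultiStreamIQA(nn.Module):
    """Multi-stream CNN: one stream per degradation cue, here the distorted
    image, the predicted primary content, and their difference."""
    def __init__(self):
        super().__init__()
        self.streams = nn.ModuleList([make_stream() for _ in range(3)])
        self.regressor = nn.Linear(64 * 3, 1)  # fuse stream features -> quality score

    def forward(self, distorted, primary):
        views = [distorted, primary, distorted - primary]
        feats = torch.cat([s(v) for s, v in zip(self.streams, views)], dim=1)
        return self.regressor(feats)

if __name__ == "__main__":
    x = torch.randn(2, 3, 224, 224)          # batch of distorted patches
    primary = PrimaryContentGenerator()(x)   # inferred primary content
    score = MultiStreamIQA()(x, primary)     # predicted quality, shape (2, 1)
    print(score.shape)
```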
