Abstract

Current top-performing blind image quality assessment (IQA) models rely on benchmark databases comprising singly distorted images, and consequently learn image features that are adequate only for predicting human-perceived visual quality on such inauthentic distortions. Furthermore, the underlying image features of these models are often extracted from the achromatic luminance channel alone, and can therefore fail to account for losses of perceived quality that may be distinctly captured in a different image modality. In this work, we propose a novel IQA model that focuses on the natural scene statistics of images afflicted with complex mixtures of unknown, authentic distortions. We derive several feature maps in different perceptually relevant color spaces and extract a large number of image features from them. We demonstrate that these features markedly improve automatic perceptual quality prediction on images containing both synthetic and authentic distortions.
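
To make the feature-extraction idea concrete, the following is a minimal sketch of a natural-scene-statistics pipeline of the kind the abstract describes: it computes mean-subtracted, contrast-normalized (MSCN) feature maps in both a luminance channel and LAB chroma channels, then summarizes each map with simple sample statistics. The function names, the Gaussian window width, and the specific choice of statistics are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import skew, kurtosis
from skimage import color


def mscn_coefficients(channel, sigma=7 / 6, c=1.0):
    """Mean-subtracted, contrast-normalized (MSCN) coefficients of a
    single image channel -- a standard NSS feature map. The Gaussian
    sigma and stabilizing constant c are illustrative defaults."""
    mu = gaussian_filter(channel, sigma)
    var = gaussian_filter(channel ** 2, sigma) - mu ** 2
    std = np.sqrt(np.abs(var))  # abs() guards against tiny negative values
    return (channel - mu) / (std + c)


def nss_features(rgb):
    """Summary statistics of MSCN maps in two perceptually relevant
    color representations (grayscale luminance and LAB chroma).
    Illustrative only; a full model would use many more maps."""
    gray = color.rgb2gray(rgb).astype(np.float64)
    lab = color.rgb2lab(rgb)
    maps = [
        mscn_coefficients(gray),            # luminance
        mscn_coefficients(lab[..., 1]),     # a* chroma channel
        mscn_coefficients(lab[..., 2]),     # b* chroma channel
    ]
    feats = []
    for m in maps:
        v = m.ravel()
        feats += [v.mean(), v.var(), skew(v), kurtosis(v)]
    return np.asarray(feats)
```

In a complete IQA model, feature vectors of this kind would be fed to a regressor (for example, a support vector regressor) trained against human opinion scores to predict perceptual quality.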
