Quantifying image quality in the absence of a reference image remains a challenge despite the introduction of numerous no-reference image quality assessment (NR-IQA) methods in recent years. Unlike most existing NR-IQA methods, this paper proposes an efficient NR-IQA method based on deep visual interpretations. Specifically, the proposed method consists of five main components: i) generating a pseudo-reference image (PRI) for the input distorted image, ii) employing a pretrained convolutional neural network to extract feature maps from the distorted image and its corresponding PRI, iii) producing visual explanation images (VEIs) from the feature maps of the distorted image and the corresponding PRI, iv) measuring the similarity between the two VEIs with an image similarity metric, and v) applying a non-linear mapping function to align the resulting quality scores. In our experiments, we evaluated the efficacy of the proposed method across various types of distortion on four benchmark datasets (LIVE, SIQAD, CSIQ, and TID2013). Comparisons with both hand-crafted and deep learning-based NR-IQA approaches show that the proposed method performs on par with the state of the art.
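To make the five-stage pipeline concrete, the sketch below traces it in PyTorch. The abstract does not fix the individual components, so every specific choice here is an illustrative assumption rather than the paper's method: the PRI is approximated by Gaussian blurring of the distorted image, the VEI is a channel-averaged feature map from a pretrained VGG-16 backbone, the similarity metric is a simplified single-scale SSIM, and the score alignment uses a four-parameter logistic function with placeholder parameters.

```python
import math

import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

# Pretrained backbone for feature extraction (assumed: VGG-16 on ImageNet).
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

def pseudo_reference(img: torch.Tensor) -> torch.Tensor:
    """Hypothetical PRI: a heavily blurred copy of the distorted image."""
    return TF.gaussian_blur(img, kernel_size=15, sigma=7.0)

@torch.no_grad()
def visual_explanation(img: torch.Tensor, depth: int = 16) -> torch.Tensor:
    """Channel-averaged feature map from the first `depth` backbone layers,
    min-max normalized to [0, 1]; stands in for the paper's VEI."""
    feats = img
    for module in list(backbone.children())[:depth]:
        feats = module(feats)
    vei = feats.mean(dim=1, keepdim=True)  # (N, 1, H', W')
    return (vei - vei.min()) / (vei.max() - vei.min() + 1e-8)

def ssim(x: torch.Tensor, y: torch.Tensor,
         c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> float:
    """Simplified single-scale SSIM using 11x11 average-pooled local stats."""
    mu_x, mu_y = F.avg_pool2d(x, 11, 1, 5), F.avg_pool2d(y, 11, 1, 5)
    var_x = F.avg_pool2d(x * x, 11, 1, 5) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 11, 1, 5) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 11, 1, 5) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
        ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return s.mean().item()

def quality_score(distorted: torch.Tensor) -> float:
    """End-to-end pipeline: PRI -> VEIs -> VEI similarity -> logistic mapping."""
    pri = pseudo_reference(distorted)
    sim = ssim(visual_explanation(distorted), visual_explanation(pri))
    # Placeholder 4-parameter logistic mapping; in practice its parameters
    # would be fitted against subjective scores on a training set.
    b1, b2, b3, b4 = 1.0, 0.0, 0.5, 0.1
    return b2 + (b1 - b2) / (1.0 + math.exp(-(sim - b3) / b4))

# Example usage with a random tensor standing in for a preprocessed image:
# score = quality_score(torch.rand(1, 3, 224, 224))
```

In the full method, the logistic parameters and the choice of backbone layer would be determined empirically; they are hard-coded here only to keep the sketch self-contained.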