Abstract

No-reference/blind image quality assessment (BIQA) aims to measure image quality without any knowledge of the reference image. Most existing BIQA metrics employ natural scene statistics or learning-based models; these have made great progress, but much room for improvement remains. Cognitive research indicates that natural images possess sparse structures that can be represented by a small number of descriptors. Exploiting this sparsity, we utilize the bag-of-words (BoW) model for image representation and propose a novel BIQA metric. After analyzing how the number of neighboring pixels and the quantization depth of a local pattern affect image content extraction, we adopt the local quantized pattern (LQP) to extract image feature descriptors. The codebook is constructed by clustering LQP-based descriptors from a set of natural images rather than distorted images, which gives it strong generalization ability. The resulting BoW-based image representation is highly sensitive to various distortion types and levels. Experiments on three public databases verify the effectiveness of the proposed metric and show that it is highly consistent with human perception.
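The codebook-plus-histogram pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes local descriptors (e.g. LQP vectors) are already extracted, stands in a plain k-means for the paper's clustering step, and uses hypothetical function names.

```python
import numpy as np

def build_codebook(descriptors, k=8, iters=10, seed=0):
    """Cluster local-pattern descriptors into a k-word codebook.

    A simple k-means stand-in for the clustering step; `descriptors`
    is an (n, d) array of per-patch feature vectors.
    """
    descriptors = np.asarray(descriptors, dtype=float)
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest codeword
        dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each codeword to the mean of its assigned descriptors
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def bow_histogram(descriptors, codebook):
    """Represent one image as a normalized histogram of codeword counts."""
    descriptors = np.asarray(descriptors, dtype=float)
    dists = np.linalg.norm(descriptors[:, None] - codebook[None], axis=2)
    labels = dists.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

In the metric described above, the codebook would be built once from pristine natural images, and each test image's BoW histogram would then serve as the distortion-sensitive feature fed to a quality predictor.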
