Abstract

Most no-reference image quality assessment (NR-IQA) techniques reported in the literature model transform coefficients by curve fitting to extract features based on natural scene statistics (NSS). The performance of curve-fitting-based NR-IQA techniques degrades because the distribution of the fitted NSS features deviates from the statistical distribution of a distorted image. Deep convolutional neural networks (DCNNs) have been used for NR-IQA and are NSS-model-independent, but their performance depends on the size of the training data. Since the available NR-IQA datasets are small, data augmentation is used, which affects the performance of DCNN-based NR-IQA techniques and is also computationally expensive. This work proposes a new patch-based NR-IQA technique that uses features extracted from discrete cosine transform (DCT) coefficients. The proposed technique is independent of curve fitting and thus avoids errors in the statistical distribution of NSS features. It relies on global statistics estimated from local patches, which allows the statistics of an image to be decomposed. The technique divides the image into patches and extracts handcrafted features, i.e., entropy, mean, variance, skewness, kurtosis, mobility, band power, energy, complexity, and peak-to-peak value. The extracted features are fed to a support vector regression (SVR) model to predict the image quality score. Experimental results show that the proposed technique is database- and image-content-independent, and that it performs better over a majority of distortion types as well as on images captured in real time.
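The pipeline described above (patch decomposition, DCT, handcrafted statistics, global pooling) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the patch size, the histogram used for entropy, the Hjorth-style definitions of mobility and complexity, the band-power normalization, and mean pooling over patches are all assumptions, since the abstract does not specify them.

```python
import numpy as np
from scipy.fft import dctn
from scipy.stats import skew, kurtosis

def patch_features(patch):
    """Handcrafted statistics of a patch's 2-D DCT coefficients (a sketch)."""
    c = dctn(patch, norm="ortho").ravel()
    # Shannon entropy of the coefficient histogram (bin count is an assumption).
    hist, _ = np.histogram(c, bins=64)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    var = c.var()
    # Hjorth-style mobility and complexity over the coefficient sequence
    # (assumed interpretation of "mobility" and "complexity").
    d1 = np.diff(c)
    mobility = np.sqrt(d1.var() / var)
    complexity = np.sqrt(np.diff(d1).var() / d1.var()) / mobility
    energy = np.sum(c ** 2)
    band_power = energy / c.size  # mean squared coefficient (assumption)
    return np.array([entropy, c.mean(), var, skew(c), kurtosis(c),
                     mobility, band_power, energy, complexity, np.ptp(c)])

def image_features(img, patch=32):
    """Pool per-patch statistics into one global feature vector (mean pooling assumed)."""
    h, w = img.shape
    feats = [patch_features(img[i:i + patch, j:j + patch])
             for i in range(0, h - patch + 1, patch)
             for j in range(0, w - patch + 1, patch)]
    return np.mean(feats, axis=0)
```

The resulting per-image feature vectors would then be regressed against subjective quality scores, e.g. with scikit-learn's `SVR(kernel="rbf").fit(X, scores)`, mirroring the support vector regression step named in the abstract.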
