Abstract

Blind image quality assessment (BIQA) is a challenging task due to the difficulties in extracting quality-aware features and modeling the relationship between image features and visual quality. Until now, most available BIQA metrics try to extract statistical features based on natural scene statistics (NSS) and build a mapping from the features to the quality score using supervised machine learning trained on a large number of labeled images. Although several promising metrics have been proposed based on this methodology, they have two drawbacks. First, only labeled images are adopted for learning; however, it has been shown that using unlabeled data in the training stage can improve learning performance. Second, these metrics try to learn a direct mapping from the features to the quality score, whereas subjective quality evaluation is a fuzzy process rather than a distinct one: human beings tend to evaluate the quality of a given image by first judging the extent to which it is “excellent,” “good,” “fair,” “bad,” or “poor,” and only then estimating a quality score, rather than directly giving an exact subjective quality score. To overcome these problems, we propose a semi-supervised and fuzzy framework for blind image quality assessment, S2F2, in this paper. In the proposed framework, (1) we formulate the fuzzy process of subjective quality assessment using fuzzy inference; specifically, we model the membership relation between the subjective quality score and the truth values of belonging to “excellent,” “good,” “fair,” “bad,” and “poor” with a Gaussian function for each term; and (2) we introduce semi-supervised local linear embedding (SS-LLE) to learn the mapping function from image features to the truth values using both labeled and unlabeled images. In addition, we extract image features based on NSS since it has led to promising performance for image quality assessment. Experimental results on two benchmark databases, the LIVE database II and the TID2008 database, demonstrate the effectiveness and promising performance of the proposed S2F2 algorithm for BIQA.
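As a rough illustration of the fuzzy step described above, the sketch below models the membership of a quality score to the five linguistic terms with Gaussian functions and recovers a crisp score from the truth values by a weighted average of the term centers. The centers, widths, the 0-100 DMOS-like scale, and the defuzzification rule are assumptions for illustration only; the paper fits its own membership parameters and learns the feature-to-truth-value mapping with SS-LLE.

```python
import numpy as np

# Five linguistic terms used in the fuzzy formulation.
TERMS = ["excellent", "good", "fair", "bad", "poor"]

# Hypothetical term centers and widths on a 0-100 DMOS-like scale
# (lower = better); the actual values would be fitted to the subjective
# scores of a given database, so these numbers are purely illustrative.
CENTERS = np.array([10.0, 30.0, 50.0, 70.0, 90.0])
SIGMAS = np.full(5, 12.0)


def membership(score: float) -> np.ndarray:
    """Gaussian truth values of a quality score for each of the five terms."""
    return np.exp(-((score - CENTERS) ** 2) / (2.0 * SIGMAS ** 2))


def defuzzify(truth_values: np.ndarray) -> float:
    """Map truth values back to a crisp score via a weighted average of the
    term centers (one common defuzzification choice)."""
    w = np.asarray(truth_values, dtype=float)
    return float(np.dot(w, CENTERS) / np.sum(w))


if __name__ == "__main__":
    tv = membership(35.0)           # truth values for a score of 35
    print(dict(zip(TERMS, np.round(tv, 3))))
    print(round(defuzzify(tv), 2))  # crisp score recovered from the truth values
```

In the full S2F2 pipeline, the truth values for an unseen image would come from the SS-LLE mapping learned on NSS features of both labeled and unlabeled images, and only a defuzzification step like the one above would then be needed to produce the final score.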

Highlights

  • Image quality assessment (IQA) is a practical research topic that has been attracting increasing attention during the past decades due to the dramatic development of visual equipment, such as TVs, digital cameras, and mobile phones

  • To verify the effectiveness of the proposed S2F2-I and S2F2-II metrics, we test them on two benchmarking databases: the LIVE database II (Sheikh et al. 2003) and the TID2008 database (Ponomarenko et al. 2009)

  • The LIVE database II consists of 29 reference images and 779 distorted images that span various distortion types—JPEG2000 compression (JP2K), JPEG compression (JPEG), additive white Gaussian noise (WN), Gaussian blurring (Gblur), and fast fading (FF), along with the associated subjective human difference mean opinion scores (DMOS), which are representative of the perceived quality of the image



Introduction

Image quality assessment (IQA) is a practical research topic that has been attracting increasing attention during the past decades due to the dramatic development of visual equipment, such as TVs, digital cameras, and mobile phones. The quality of this equipment, and of the images we obtain with it, affects how human beings perceive visual information. It is therefore necessary to develop blind IQA (BIQA) algorithms that estimate the visual quality of these images and help us choose better equipment or images. Developing effective BIQA metrics is very difficult, especially universal BIQA (UBIQA) metrics that work across various types of distortion.
