Abstract

Blind image quality assessment (BIQA) has been studied intensively with machine learning techniques such as the support vector machine (SVM) and k-means. Existing BIQA metrics, however, do not perform robustly across different distortion types. We attribute this problem to the fact that the commonly used traditional machine learning techniques rely on shallow architectures, which contain only a single layer of nonlinear feature transformation and therefore cannot adequately mimic the mechanism by which the human visual system perceives image quality. Recent advances in deep neural networks (DNNs) can help to solve this problem, since DNNs have been found to better capture the essential attributes of images. In this paper we therefore introduce a new Deep-learning-based Image Quality Index (DIQI) for blind quality assessment. Extensive experiments on the new TID2013 database confirm the effectiveness of DIQI relative to classical full-reference and state-of-the-art reduced- and no-reference IQA approaches.
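To illustrate the shallow-versus-deep distinction the abstract draws, the following is a minimal sketch of a multi-layer perceptron regressing a scalar quality score from an image-feature vector. It is a generic illustration with random weights, not the authors' actual DIQI architecture; all names and layer sizes here are hypothetical.

```python
import numpy as np

# Hypothetical illustration: a "shallow" model applies one nonlinear
# feature transformation, whereas a deep network stacks several.
# This is NOT the authors' DIQI architecture, just a generic sketch.

rng = np.random.default_rng(0)

def relu(x):
    # elementwise nonlinearity used between layers
    return np.maximum(0.0, x)

def mlp_quality_score(features, hidden_sizes):
    """Regress a scalar quality score from an image-feature vector
    through a stack of nonlinear layers (weights are random here,
    standing in for learned parameters)."""
    h = features
    for n_out in hidden_sizes:
        W = rng.standard_normal((h.shape[0], n_out)) * 0.1
        b = np.zeros(n_out)
        h = relu(h @ W + b)          # one nonlinear feature transformation
    w_out = rng.standard_normal(h.shape[0]) * 0.1
    return float(h @ w_out)          # linear readout to a single score

features = rng.standard_normal(64)   # stand-in for extracted image features
score = mlp_quality_score(features, [32, 16, 8])  # three stacked layers
```

A shallow SVM-style model would correspond to a single pass through one such transformation; stacking several is what the deep approach adds.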
