Abstract

Blind image quality assessment (BIQA) is of fundamental importance to the low-level computer vision community, and there is growing interest in exploiting deep neural networks for BIQA. Despite the notable success achieved, there is a broad consensus that training deep convolutional neural networks (DCNNs) heavily relies on massive amounts of annotated data. Unfortunately, BIQA is typically a small-sample problem, which severely restricts the generalization ability of BIQA models. To improve the accuracy and generalization ability of BIQA metrics, this work proposes a completely opinion-unaware BIQA method in which no subjective annotations are involved in the training stage. Multiple full-reference image quality assessment (FR-IQA) metrics are employed to label the distorted images as a substitute for subjective quality annotations. A deep neural network (DNN) is trained to blindly predict the multiple FR-IQA scores in the absence of the corresponding pristine image. Finally, a self-supervised FR-IQA score aggregator, implemented as an adversarial auto-encoder, pools the predictions of the multiple FR-IQA scores into the final quality score. Even though no subjective scores are involved in the training stage, experimental results indicate that the proposed full-reference-induced BIQA framework is competitive with state-of-the-art BIQA metrics.
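To make the pseudo-labelling idea concrete, the sketch below (not the authors' code) shows the core training step under simplifying assumptions: distorted images are scored against their pristine references by an FR-IQA metric, and a small CNN is trained to regress those scores from the distorted image alone. PSNR is computed inline as a stand-in for whichever FR-IQA metrics the paper actually uses (e.g. SSIM, FSIM would fill the remaining output slots); the network, data, and hyperparameters are illustrative only, and the adversarial auto-encoder aggregator is omitted.

```python
import torch
import torch.nn as nn

def psnr(ref: torch.Tensor, dist: torch.Tensor) -> torch.Tensor:
    """Peak signal-to-noise ratio per image, for inputs scaled to [0, 1]."""
    mse = torch.mean((ref - dist) ** 2, dim=(1, 2, 3))
    return 10.0 * torch.log10(1.0 / mse.clamp(min=1e-10))

class MultiScorePredictor(nn.Module):
    """Tiny CNN with one regression output per FR-IQA metric (here 1 for brevity)."""
    def __init__(self, n_metrics: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_metrics)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = MultiScorePredictor(n_metrics=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Random tensors stand in for a real (pristine, distorted) data loader.
ref_batch = torch.rand(4, 3, 64, 64)
dist_batch = (ref_batch + 0.1 * torch.randn_like(ref_batch)).clamp(0.0, 1.0)

# FR-IQA scores serve as pseudo-labels: no human opinion scores needed.
with torch.no_grad():
    targets = psnr(ref_batch, dist_batch).unsqueeze(1)

pred = model(dist_batch)  # blind prediction: the distorted image only
loss = loss_fn(pred, targets)
opt.zero_grad()
loss.backward()
opt.step()
```

At inference time only the trained predictor is needed, so quality can be estimated without a pristine reference, which is what makes the resulting metric blind.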
