Abstract

Most existing blind image quality assessment (BIQA) models for images in the wild rely heavily on human ratings, which are extraordinarily labor-intensive to collect. Here, we propose an opinion-free BIQA method that learns from multiple annotators to assess the perceptual quality of images captured in the wild. Specifically, we first synthesize distorted images from pristine counterparts. We then randomly assemble a set of image pairs from the synthetic images, and use a group of IQA models to assign each pair a pseudo-binary label indicating which image has higher quality, which serves as the supervisory signal. On this pseudo-labeled dataset, we train a deep neural network (DNN)-based BIQA model to rank perceptual quality, optimizing it for consistency with the binary rank labels. Because domain shift, e.g., distortion shift and content shift, exists between synthetic and in-the-wild images, we alleviate it in two ways. First, the simulated distortions are designed to resemble authentic distortions as closely as possible. Second, an unsupervised domain adaptation (UDA) module is applied to encourage the learning of domain-invariant features across the two domains. Extensive experiments demonstrate the effectiveness of the proposed opinion-free BIQA model, which achieves state-of-the-art performance in terms of correlation with human opinion scores as well as in the gMAD competition. Our code is available at: https://github.com/wangzhihua520/OF_BIQA.
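The abstract does not specify the exact ranking objective behind "optimized for consistency with the binary rank labels." One common instantiation of pairwise quality ranking is a Thurstone-style preference model trained with a fidelity loss; the PyTorch sketch below assumes that setup. The names PairwiseRanker, preference_prob, and fidelity_loss, and the choice of a Thurstone Case V model, are our assumptions for illustration, not the released implementation (see the repository linked above for the authoritative code).

```python
import torch
import torch.nn as nn

class PairwiseRanker(nn.Module):
    """Maps an image to a scalar perceptual-quality score."""

    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone            # any DNN feature extractor
        self.head = nn.Linear(feat_dim, 1)  # scalar quality prediction

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x).flatten(1)
        return self.head(feats).squeeze(-1)

def preference_prob(score_a: torch.Tensor, score_b: torch.Tensor) -> torch.Tensor:
    # Thurstone Case V (assumed): P(A preferred over B) with
    # unit-variance Gaussian noise on each predicted score.
    normal = torch.distributions.Normal(0.0, 1.0)
    return normal.cdf((score_a - score_b) / (2.0 ** 0.5))

def fidelity_loss(p: torch.Tensor, label: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Fidelity loss between the predicted preference p and the
    # pseudo-binary label (1 if image A is judged better, else 0).
    return (1.0 - torch.sqrt(p * label + eps)
                - torch.sqrt((1.0 - p) * (1.0 - label) + eps)).mean()

# Usage on a pseudo-labeled pair (img_a, img_b, pseudo_label):
#   p = preference_prob(model(img_a), model(img_b))
#   loss = fidelity_loss(p, pseudo_label.float())
```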
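Likewise, the UDA module is not detailed in the abstract. A widely used way to encourage domain-invariant features is adversarial alignment via a gradient reversal layer, as in DANN; the following sketch assumes that technique. GradReverse, DomainClassifier, and the scaling factor lam are hypothetical names introduced here, not identifiers from the paper's code.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales gradients in
    the backward pass, so the backbone is trained to fool the domain
    classifier (adversarial UDA)."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainClassifier(nn.Module):
    """Predicts whether features come from the synthetic or the
    in-the-wild domain; gradients flow back through the reversal
    layer, pushing the backbone toward domain-invariant features."""

    def __init__(self, feat_dim: int, lam: float = 1.0):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        reversed_feats = GradReverse.apply(feats, self.lam)
        return self.net(reversed_feats).squeeze(-1)  # domain logit
```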
