Abstract

Blind image quality assessment (BIQA) has witnessed flourishing progress owing to rapid advances in deep learning. The vast majority of prior BIQA methods leverage models pre-trained on ImageNet to mitigate the data shortage problem. These well-trained models, however, can be sub-optimal when applied to the BIQA task, which differs considerably from the image classification domain. To address this issue, we make the first attempt to leverage plentiful unlabeled data for self-supervised pre-training tailored to BIQA. Based on distorted images generated from high-quality samples with a designed distortion augmentation strategy, the proposed pre-training is implemented as a feature representation prediction task: patch-wise feature representations within a grid are integrated to predict the representation of the patch below it. The prediction quality is then evaluated with a contrastive loss to capture quality-aware information for the BIQA task. Experimental results on the KADID-10k and KonIQ-10k databases demonstrate that the learned pre-trained model significantly benefits existing learning-based IQA models.
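The prediction-then-contrast scheme described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the mean-pooling aggregation, the linear prediction head `w`, the feature dimension, and the InfoNCE-style loss form are all assumptions made for the sketch, with random vectors standing in for real patch features.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Normalize feature vectors so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def predict_below(context_feats, w):
    # Integrate patch-wise features from a grid (here: mean pooling,
    # an assumed choice) and map them through a hypothetical linear
    # head to predict the representation of the patch below.
    return context_feats.mean(axis=0) @ w

def contrastive_loss(pred, positive, negatives, tau=0.1):
    # InfoNCE-style contrastive loss: the prediction should be closer
    # to the true representation of the patch below (the positive)
    # than to representations of unrelated patches (the negatives).
    pred = l2_normalize(pred)
    candidates = l2_normalize(np.vstack([positive[None, :], negatives]))
    logits = candidates @ pred / tau
    logits -= logits.max()  # numerical stability
    # Softmax cross-entropy with the positive at index 0.
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
d = 16                                       # assumed feature dimension
context = rng.normal(size=(3, d))            # features of patches in one grid row
w = rng.normal(size=(d, d)) / np.sqrt(d)     # hypothetical prediction head
pred = predict_below(context, w)
positive = pred + 0.01 * rng.normal(size=d)  # stand-in for the true patch feature
negatives = rng.normal(size=(8, d))          # stand-ins for unrelated patches

loss = contrastive_loss(pred, positive, negatives)
print(float(loss))
```

Because the positive here nearly matches the prediction, the loss is close to zero; replacing the positive with an unrelated random vector drives it up, which is the signal that steers pre-training toward quality-aware features.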
