Abstract

No-reference image quality assessment (NR-IQA) based on deep learning has attracted great research attention recently. However, its accuracy and efficiency still leave room for improvement. To address these issues, in this paper we propose a quality-distinguishing and patch-comparing NR-IQA approach based on a convolutional neural network (QDPC-CNN). We improve prediction accuracy through two proposed mechanisms: quality-distinguishing adaptation and patch-comparing regression. The former trains multiple models on different subsets of a dataset and adaptively selects one to predict the quality score of a test image according to its quality level; the latter generates patch pairs for regression under different combination strategies, making better use of reference images in network training while enlarging the training data. We further improve the efficiency of network training with a new patch sampling scheme based on the visual importance of each patch. We conduct extensive experiments on several public databases and compare the proposed QDPC-CNN with existing state-of-the-art methods. The experimental results demonstrate that our method outperforms the others in terms of both accuracy and efficiency.
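The quality-distinguishing adaptation described above can be illustrated with a minimal sketch: train one regressor per coarse quality level and, at test time, route an image to the model matching its estimated quality. The bin thresholds, the `select_model` helper, and the stand-in per-level models below are illustrative assumptions, not details from the paper.

```python
# Sketch of quality-distinguishing adaptation: one model per quality
# level, selected at test time from a coarse quality estimate.
# Thresholds and models are hypothetical placeholders.

def select_model(coarse_score, models, thresholds=(0.33, 0.66)):
    """Pick the per-quality-level model for a coarse score in [0, 1]."""
    if coarse_score < thresholds[0]:
        return models["low"]
    if coarse_score < thresholds[1]:
        return models["medium"]
    return models["high"]

# Stand-in "models": in the paper these would be CNNs trained on
# different quality subsets; here each just rescales the coarse score.
models = {
    "low": lambda s: s * 0.9,
    "medium": lambda s: s,
    "high": lambda s: min(1.0, s * 1.1),
}

refined = select_model(0.8, models)(0.8)  # routed to the "high" model
```

The same routing idea extends naturally to any number of quality bins; only the threshold tuple and the model dictionary need to grow.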
