Abstract

Recently, deep-learning-based blind picture quality measurement (BPQM) metrics have gained significant attention. However, training a robust deep BPQM metric remains challenging because of the limited number of subject-rated training samples. State-of-the-art full-reference (FR) picture quality measurement (PQM) metrics agree well with human subjective quality scores, so they can be employed to approximate those scores when training BPQM metrics. Inspired by this, we propose a deep encoder–decoder architecture (DEDA) for opinion-unaware (OU) BPQM that does not require human-labeled distorted samples for training. In the training procedure, to avoid overfitting and to ensure the independence of the training and testing samples, we first construct 6,000 distorted pictures and use their objective quality/similarity maps, obtained with a high-performance FR-PQM metric, as training labels. Subsequently, an end-to-end mapping between the distorted pictures and their objective quality/similarity maps (labels) is learned, represented as the DEDA, which takes a distorted picture as input and outputs its predicted quality/similarity map. In the DEDA, a pyramid supervision training strategy is used, applying supervised learning over three scale layers to optimize the parameters efficiently. In the testing procedure, the quality/similarity maps of the testing samples, which can help localize distortions, are predicted with the trained DEDA. The predicted quality/similarity maps are then gradually pooled together to obtain the overall objective quality scores. Comparative experiments on three publicly available standard PQM datasets demonstrate that our proposed DEDA metric agrees more closely with subjective assessment than previous state-of-the-art OU-BPQM metrics.
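The two mechanisms the abstract describes — pyramid supervision over three scale layers and pooling a predicted quality/similarity map into a single score — can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the abstract does not specify the loss, the downsampling scheme, or the pooling rule, so the average-pool downsampler, the plain MSE loss, and simple mean pooling used here are all assumptions.

```python
import numpy as np

def downsample(m, factor):
    # Average-pool a 2-D map by an integer factor (assumed downsampling scheme)
    h, w = m.shape
    return m[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def pyramid_loss(preds, label):
    """Supervised loss over three scale layers (pyramid supervision, sketched).

    preds: predicted quality/similarity maps at full, 1/2, and 1/4 resolution.
    label: full-resolution FR-PQM quality/similarity map used as the label.
    """
    loss = 0.0
    for level, pred in enumerate(preds):
        # Match the label to each scale layer before comparing (assumption)
        target = downsample(label, 2 ** level) if level else label
        loss += np.mean((pred - target) ** 2)  # MSE per scale (assumption)
    return loss

def pool_score(quality_map):
    # Pool the predicted map into one overall objective quality score;
    # simple mean pooling stands in for the paper's gradual pooling.
    return float(quality_map.mean())
```

In a real training loop, `preds` would come from the decoder's three supervised scale layers and `label` from the FR-PQM metric applied to the distorted/reference pair; only the final full-resolution map would be pooled at test time.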
