Abstract

Subjective experimental results are widely used as the ground truth in objective Image Quality Assessment (IQA). In particular, the pairwise comparison method has advantages over Mean Opinion Scores (MOS), but measuring the consistency between subjective pairwise comparisons and objective quality predictions is problematic. In this paper, we first analyze the shortcomings of the current method for evaluating the consistency between pairwise comparisons given by human subjects and ranking results given by objective IQA algorithms. We then propose a new direct evaluation method, the Ranking Consistent Rate, to address this problem. Moreover, our method makes it possible to check the self-consistency of datasets based on pairwise comparisons and to evaluate the performance of an IQA algorithm more accurately.
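The abstract does not define the Ranking Consistent Rate, so the following is only a hedged sketch of one natural reading: the fraction of subjective pairwise preferences whose preferred image also receives the higher objective quality score. The function name, data layout, and scoring convention are assumptions for illustration, not the paper's actual formulation.

```python
def ranking_consistent_rate(pairs, scores):
    """Fraction of subjective pairwise comparisons that agree with
    the ordering induced by objective quality predictions.

    pairs  -- list of (preferred_id, other_id) tuples from subjects
              (assumed format; the paper's data layout may differ)
    scores -- dict mapping image id -> objective quality prediction,
              where a higher score means better predicted quality
    """
    consistent = sum(1 for win, lose in pairs if scores[win] > scores[lose])
    return consistent / len(pairs)

# Toy example with three subjective preferences and one disagreement
scores = {"a": 0.9, "b": 0.6, "c": 0.3}
pairs = [("a", "b"), ("b", "c"), ("c", "a")]  # last pair contradicts the scores
print(ranking_consistent_rate(pairs, scores))  # -> 0.666...
```

Under this reading, a rate of 1.0 would mean the objective algorithm ranks every compared pair the same way the human subjects did, and the same check applied to comparisons between subjects would probe the dataset's self-consistency.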
