Abstract

Video quality as perceived by human observers is the ground truth in Video Quality Assessment (VQA). It depends on many variables, one of which is the content of the video being evaluated. Despite evidence that content affects the quality score a sequence receives from human evaluators, currently available VQA databases mostly comprise sequences that fail to take this into account. In this paper, we aim to identify and analyze differences between human cognitive, affective, and conative responses to a set of videos commonly used for VQA and a set of videos specifically chosen to include content that might affect evaluators' judgments of perceived video quality. Our findings indicate that considerable differences exist between the two sets on the selected factors, leading us to conclude that videos featuring different types of content than those currently employed might be more appropriate for VQA.
