Abstract

In this paper we evaluate two objective quality measures, the root-mean-square error (RMSE) and a model based on the human visual system (HVS), on their ability to predict perceived image quality under variations in bit-rate, processing method, and scene content. In theory, a quality metric should predict perceived image quality independently of these variations; in practice, however, this requirement is not trivial to meet. Moreover, subjects themselves may have difficulty making comparisons across processing methods or across scenes. To test whether subjects use a separate quality scale for each identifiable scene and processing method, or instead a single common quality scale, we set up experiments in which the influence of bit-rate, processing method, and scene content was measured. In all experiments, subjects were instructed to judge the quality difference between two simultaneously presented images.
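As a point of reference for the first metric named above, the following is a minimal sketch of the root-mean-square error between a reference image and a processed image; the arrays and values are illustrative only and do not come from the paper.

```python
import numpy as np

def rmse(reference, processed):
    """Root-mean-square error between two equally sized images.

    A lower RMSE means the processed image is numerically closer to
    the reference; whether that tracks *perceived* quality is exactly
    the question the abstract raises.
    """
    reference = np.asarray(reference, dtype=float)
    processed = np.asarray(processed, dtype=float)
    # Pixel-wise differences, squared, averaged, then square-rooted.
    return np.sqrt(np.mean((reference - processed) ** 2))

# Illustrative 2x2 "images" (hypothetical pixel values).
ref = np.array([[10.0, 20.0], [30.0, 40.0]])
proc = np.array([[12.0, 18.0], [30.0, 44.0]])
print(rmse(ref, proc))  # sqrt((4 + 4 + 0 + 16) / 4) = sqrt(6)
```

Unlike an HVS-based model, this measure weights every pixel error equally, regardless of whether the error is visible to a human observer.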
