Abstract
The interest in objective quality assessment has significantly increased over the past decades. Several objective quality metrics have been proposed and made publicly available; moreover, several subjective quality assessment databases have been distributed in order to evaluate and compare these metrics. However, several questions arise: is the behaviour of objective metrics consistent across databases, contents, and distortions? How significantly might subjective scores fluctuate on different displays (e.g. CRT or LCD)? Which objective quality metric best evaluates a given distortion? In this article, we analyse the behaviour of four objective quality metrics (including PSNR) tested on three image databases. We demonstrate that the performance of the quality metrics can fluctuate strongly depending on the database used for testing. We also show that all metrics are consistent across two distinct displays.
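As a minimal sketch of the evaluation pipeline the abstract alludes to (not the authors' code), the snippet below computes PSNR for a reference/distorted image pair and shows how metric scores are typically compared against a database's subjective scores via Pearson and Spearman correlation; the variable names and commented usage are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio (in dB) between two same-sized images."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical usage: correlate metric scores with a database's mean opinion
# scores (MOS) to evaluate the metric, as is standard practice.
# metric_scores = [psnr(ref, dist) for ref, dist in image_pairs]
# plcc, _ = pearsonr(metric_scores, mos)    # prediction accuracy (linearity)
# srocc, _ = spearmanr(metric_scores, mos)  # prediction monotonicity
```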