Abstract

The training and performance analysis of objective video quality assessment algorithms is complex due to the huge variety of possible content classes and transmission distortions. Several secondary issues, such as free parameters in machine learning algorithms and the alignment of subjective datasets, put an additional burden on the developer. In this paper, three subsequent steps are presented to address such issues. First, the content and coding parameter space of a large-scale database is used to select dedicated subsets for training objective algorithms. This aims at providing a method for selecting the most significant contents and coding parameters from all imaginable combinations. In the practical case where only a limited set is available, it also helps avoid redundancy in the training subset selection. The second step is a discussion of performance measures for algorithms that employ machine-learning methods. The particularity of these performance measures is that the quality of the training and verification datasets is taken into consideration. Common issues that arise when using existing measures are presented, and improved or complementary methods are proposed. The measures are applied to two examples of no-reference objective assessment algorithms using the aforementioned subsets of the large-scale database. While limited in terms of practical applications, this sandbox approach of objectively predicting the quality of objectively evaluated video sequences allows additional influence factors from subjective studies to be eliminated. In the third step, the proposed performance measures are applied to the practical case of training and analyzing assessment algorithms on readily available subjectively annotated image datasets. The presentation method in this part of the paper can also serve as an exemplary recommendation for reporting in-depth performance information.
Using this presentation method, future publications presenting newly developed quality assessment algorithms may be significantly improved.
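The abstract does not name the specific performance measures under discussion, but objective quality assessment algorithms are conventionally evaluated by correlating predicted scores against reference scores (e.g. mean opinion scores from subjective studies). As a minimal illustrative sketch, not the paper's actual method, two standard measures are the Pearson linear correlation coefficient (PLCC) and the Spearman rank-order correlation coefficient (SROCC); all data values below are hypothetical:

```python
import math

def pearson(x, y):
    # PLCC: degree of linear agreement between predicted and reference scores
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    # SROCC: Pearson correlation of rank positions, i.e. monotonic agreement
    # (simple version without tie handling, sufficient for distinct scores)
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0.0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# Hypothetical predicted scores vs. subjective mean opinion scores (MOS)
predicted = [3.1, 4.0, 2.2, 4.8, 3.6]
mos = [3.0, 4.2, 2.0, 4.9, 3.5]
print(pearson(predicted, mos))
print(spearman(predicted, mos))
```

A point the abstract emphasizes is that such scalar measures alone can mislead: their interpretation also depends on the quality and composition of the training and verification datasets, which is what the proposed complementary measures account for.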
