Abstract

Automatic video quality assessment of user-generated content (UGC) has gained increased interest recently, due to the ubiquity of shared video clips uploaded and circulated on social media platforms across the globe. Most existing video quality models developed for this vast content are trained on large numbers of samples labeled during large-scale subjective studies, yet they often fail to exhibit adequate generalization on unseen data. Moreover, large labeled video quality datasets are not always available for every scenario, and may not address the joint evaluation of social videos and the distortions that afflict them. Because of this, it is also desirable to develop opinion-unaware, "completely blind" video quality models that are free of training, yet can compete with existing learning-based models. Here we propose such a model called VIQE (VIdeo Quality Evaluator), which we designed based on a comprehensive analysis of patch- and frame-wise video statistics, as well as of space-time statistical regularities of videos. The statistical features derived from the analysis capture complementary predictive aspects of perceptual quality, and are aggregated to obtain final video quality scores. Extensive experiments on recent large-scale video quality databases demonstrate that VIQE is even competitive with state-of-the-art opinion-aware models. The source code is being made available at https://github.com/uniqzheng/Complete-Blind-VQA .
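As a rough illustration of what frame-wise natural scene statistics and their temporal aggregation can look like, the sketch below computes mean-subtracted contrast-normalized (MSCN) coefficients per frame, extracts a few distributional statistics, and mean-pools them over the clip. The function names, parameter choices, feature set, and pooling rule are assumptions for illustration only and are not the actual VIQE pipeline described in the paper.

```python
# Illustrative sketch of frame-wise statistical features pooled over time.
# NOTE: hypothetical feature set and pooling; not the paper's VIQE method.
import numpy as np
from scipy.ndimage import gaussian_filter


def mscn_coefficients(frame: np.ndarray, sigma: float = 7 / 6) -> np.ndarray:
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a grayscale frame."""
    frame = frame.astype(np.float64)
    mu = gaussian_filter(frame, sigma)                                  # local mean
    sd = np.sqrt(np.abs(gaussian_filter(frame * frame, sigma) - mu * mu))  # local std
    return (frame - mu) / (sd + 1.0)                                    # contrast-normalize


def frame_features(frame: np.ndarray) -> np.ndarray:
    """Simple distributional statistics of the MSCN map (illustrative features only)."""
    mscn = mscn_coefficients(frame)
    return np.array([mscn.var(), np.mean(np.abs(mscn)), np.mean(mscn ** 4)])


def pooled_video_features(frames: list[np.ndarray]) -> np.ndarray:
    """Aggregate per-frame features across the clip (mean pooling as a placeholder)."""
    return np.mean([frame_features(f) for f in frames], axis=0)


if __name__ == "__main__":
    # Toy example with random "frames"; real usage would decode an actual video.
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, size=(240, 320)).astype(np.uint8) for _ in range(10)]
    print(pooled_video_features(frames))
```

In an opinion-unaware setting, such pooled features would typically be compared against statistics of pristine content rather than regressed onto subjective scores; the mapping to a final quality score is left out here since it is model-specific.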
