Abstract
Among the various means of evaluating the quality of video streams, lightweight No-Reference (NR) methods have low computational cost and may be executed on thin clients. These methods are therefore ideal candidates for real-time quality assessment, automated quality control, and adaptive mobile streaming. Yet existing real-time NR approaches are not typically designed to handle network-distorted streams, and thus perform poorly when compared to Full-Reference (FR) algorithms. In this work, we present a generic NR method in which machine learning (ML) is used to construct a quality metric trained on simple NR metrics. Testing our method on nine representative ML algorithms allows us to demonstrate the generality of our approach while identifying the best-performing algorithms. We use an extensive video dataset (960 video samples), generated under a variety of lossy network conditions, thereby verifying that our NR metric remains accurate under realistic streaming scenarios. In this way, we achieve a quality index that is as computationally efficient as typical NR metrics and as accurate as the FR algorithm Video Quality Metric (97% correlation).
Highlights
No-Reference (NR) video quality methods have the potential to provide real-time video quality assessment and automated quality control, for instance in the context of mobile streaming.
This is the approach we use in our work, where we introduce a new NR method that combines the simplicity of NR metrics with the accuracy that is typically achieved only through heavyweight FR methods.
We evaluate the typical scenario in which our prediction-based metric is assessed on video conditions that have previously been seen by the Supervised Learning (SL) algorithm.
Summary
No-Reference (NR) video quality methods have the potential to provide real-time video quality assessment and automated quality control, for instance in the context of mobile streaming. This is because NR algorithms are computationally light and do not require comparing the video stream under scrutiny with its original (unimpaired) benchmark, as would be the case with Full-Reference (FR) methods [1]. FR algorithms such as the Video Quality Metric (VQM) [4] have proven to correlate well with the human visual system [1], which is why many studies use them to benchmark other, simpler algorithms rather than deploying them directly in video management applications [5]. This is the approach we use in our work, where we aim to introduce a new NR method that combines the simplicity (and applicability) of NR metrics with the accuracy that is typically achieved only through heavyweight FR methods. Subjective studies are fundamental to the various applications of VQA, yet great effort has been directed towards mimicking them through completely automated processes and algorithms, as in objective QoE [10].
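The general approach described above, training a supervised regressor on lightweight NR features so that it approximates an FR score such as VQM, can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the feature names, the synthetic data, and the choice of a random forest (one plausible candidate among the nine ML algorithms mentioned) are all assumptions for demonstration.

```python
# Hypothetical sketch: fit a supervised regressor on simple NR features
# to predict an FR quality score (e.g. VQM). All data here is synthetic;
# real features would be NR measurements (blockiness, blur, loss ratio).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

n_samples = 960  # matches the dataset size quoted in the abstract
X = rng.uniform(size=(n_samples, 3))           # stand-in NR features
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]  # toy FR-like target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Held-out fit quality of the learned NR-to-FR mapping
score = r2_score(y_test, model.predict(X_test))
```

Once trained, such a model is as cheap to evaluate as the NR features it consumes, since no reference video is needed at prediction time; the expensive FR metric is only required offline to build the training labels.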