Abstract
Accurate and efficient video quality assessment (VQA) methods provide important guidance for optimizing network video quality, improving video compression performance, and recommending compression coding parameters. No-reference VQA methods fall mainly into two categories: bitstream-based methods, which are fast but less accurate, and pixel-based methods, which are more accurate but slower and demand substantial hardware resources. Neither category is well suited to assessing the quality of large volumes of network videos. To address this problem, we propose an efficient and accurate cross-domain no-reference network video quality assessment method (CDNRVQA) based on the bitstream. CDNRVQA takes I-frames as representative frames and uses a deep neural network to extract high-level semantic features and distortion features from them; it then derives temporal features and additional spatial features from the macroblocks and motion vectors in the compressed video. A feature fusion strategy combines the pixel-domain and compressed-domain features into comprehensive cross-domain features that capture the distortion introduced by content, motion, and compression. CDNRVQA performs well on large VQA datasets: it achieves performance comparable to state-of-the-art VQA models while greatly reducing prediction time and hardware consumption on the device, making it practical to apply VQA methods on video platforms.
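The abstract describes a two-branch pipeline: deep features from I-frames in the pixel domain, handcrafted temporal/spatial features from macroblocks and motion vectors in the compressed domain, and a fusion step that concatenates both before score regression. The sketch below illustrates that structure only; the feature extractors, dimensions, and the linear score head are illustrative placeholders, not the paper's actual networks.

```python
import numpy as np

def pixel_domain_features(i_frame: np.ndarray) -> np.ndarray:
    """Placeholder for the deep network that extracts semantic and
    distortion features from a representative I-frame."""
    # Toy stand-in: per-channel mean and standard deviation.
    means = i_frame.mean(axis=(0, 1))
    stds = i_frame.std(axis=(0, 1))
    return np.concatenate([means, stds])

def compressed_domain_features(motion_vectors: np.ndarray,
                               macroblock_sizes: np.ndarray) -> np.ndarray:
    """Placeholder for temporal/spatial statistics derived from the
    bitstream's motion vectors and macroblock partitioning."""
    mv_magnitude = np.linalg.norm(motion_vectors, axis=-1)
    return np.array([
        mv_magnitude.mean(),          # average motion intensity
        mv_magnitude.std(),           # motion variability over time
        macroblock_sizes.mean(),      # coarser blocks often mean smoother areas
    ])

def fuse_and_score(pix: np.ndarray, comp: np.ndarray,
                   weights: np.ndarray, bias: float) -> float:
    """Concatenate cross-domain features and regress a quality score
    (a hypothetical linear head in place of the paper's regressor)."""
    fused = np.concatenate([pix, comp])
    return float(fused @ weights + bias)

# Synthetic example inputs (an RGB I-frame plus bitstream statistics).
rng = np.random.default_rng(0)
frame = rng.random((64, 64, 3))
mvs = rng.normal(size=(30, 2))        # 30 motion vectors, (dx, dy)
mb_sizes = rng.choice([4, 8, 16], 30)  # macroblock partition sizes

pix = pixel_domain_features(frame)
comp = compressed_domain_features(mvs, mb_sizes)
w = rng.normal(size=pix.size + comp.size)
score = fuse_and_score(pix, comp, w, bias=50.0)
print(score)
```

The efficiency argument in the abstract rests on the compressed-domain branch: motion vectors and macroblock layouts are read directly from the bitstream, so only the representative I-frames need a (comparatively expensive) deep-network pass.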