Abstract

Perceptual video quality assessment (VQA) is an integral part of many video applications, as it enables quality control for videos delivered to end-users. Video Multi-method Assessment Fusion (VMAF) was recently proposed as a full-reference VQA model that combines quality-aware features to predict perceptual quality. Owing to the limited spatiotemporal resolution capacity of the human eye, VMAF uses motion along with other spatial features in its model. The model implements a basic co-located luminance difference method to determine motion. It has been observed that this method is inadequate to capture the temporal characteristics of the video. In this paper, we propose an improvement to the existing temporal metric of VMAF. The newly proposed Temporal Motion Vector based VMAF (TMV-VMAF) replaces the existing temporal metric in VMAF with a block-based motion state classification method that approximates the motion score of a frame by leveraging motion estimation and block-level energy information. This temporal feature is fed, along with the other spatial features of VMAF, into a Support Vector Regression framework, where the model is trained on a database of HD videos. Our results show that TMV-VMAF achieves better correlation with opinion scores than the existing VMAF.
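The baseline temporal metric that the paper criticizes is a co-located luminance difference: the motion score of a frame is taken from the per-pixel luma difference against the previous frame. The sketch below illustrates this idea and why it can misread temporal activity; it is a minimal approximation for illustration only (the actual VMAF motion feature additionally low-pass filters the luma planes before differencing, which is omitted here).

```python
import numpy as np

def colocated_motion_score(prev_luma: np.ndarray, curr_luma: np.ndarray) -> float:
    """Mean absolute co-located luminance difference between two frames.

    A simplified stand-in for VMAF's baseline temporal ("motion") feature:
    it has no notion of motion vectors or block structure, which is the
    inadequacy the paper's block-based method is meant to address.
    """
    diff = curr_luma.astype(np.float64) - prev_luma.astype(np.float64)
    return float(np.mean(np.abs(diff)))

# A static frame pair yields zero "motion", while a uniform brightness
# change (no actual object motion) yields a large score -- one way a
# pure pixel-difference metric conflates luminance change with motion.
static = np.full((4, 4), 128.0)
brighter = static + 10.0
print(colocated_motion_score(static, static))    # 0.0
print(colocated_motion_score(static, brighter))  # 10.0
```

The function name and the unfiltered differencing are assumptions made for this sketch, not the paper's implementation.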

