Abstract
Blind video quality assessment (VQA) metrics predict the quality of a video without access to a reference video. This paper proposes a new blind VQA model based on multilevel video perception, abbreviated MVP. The model fuses three levels of features occurring in natural video scenes to predict video quality: natural video statistics (NVS) features, global motion features, and motion temporal correlation features, which characterize scene statistics, motion type, and variations in temporal correlation, respectively. During motion feature extraction, motion-compensated filtering is applied to enhance the videos and highlight their motion characteristics, improving the perceptual relevance of the extracted features. Experimental results on the LIVE and CSIQ video databases show that the scores predicted by the new model correlate highly with human perception and have low root mean square error. MVP clearly outperforms state-of-the-art blind VQA metrics, and is competitive even against top-performing full-reference VQA metrics.
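The abstract does not specify how the three feature levels are fused or regressed onto quality scores. As a minimal sketch of the general idea, the snippet below concatenates hypothetical per-level feature vectors and fits a simple linear quality regressor; all feature dimensions, the placeholder scores, and the choice of regressor are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_videos = 50

# Hypothetical per-level feature vectors (dimensions are illustrative only)
nvs = rng.normal(size=(n_videos, 36))            # natural video statistics features
global_motion = rng.normal(size=(n_videos, 4))   # global motion features
temporal_corr = rng.normal(size=(n_videos, 8))   # motion temporal correlation features
mos = rng.uniform(0, 100, size=n_videos)         # placeholder subjective scores

# Multilevel fusion by concatenation, then a least-squares quality regressor
X = np.hstack([nvs, global_motion, temporal_corr])
X_aug = np.hstack([X, np.ones((n_videos, 1))])   # append a bias term
w, *_ = np.linalg.lstsq(X_aug, mos, rcond=None)
pred = X_aug @ w                                 # predicted quality scores

print(X.shape, pred.shape)
```

In practice a blind VQA model of this kind would be trained on one set of videos and evaluated on held-out videos, reporting correlation with subjective scores and root mean square error, as the abstract describes.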