Abstract

Blind video quality assessment (VQA) metrics predict the quality of videos without access to reference videos. This paper proposes a new blind VQA model based on multilevel video perception, abbreviated as MVP. The model fuses three levels of features found in natural video scenes to predict video quality: natural video statistics (NVS) features, global motion features, and motion temporal correlation features, which capture video scene characteristics, video motion types, and variations in temporal correlation, respectively. During motion feature extraction, motion-compensated filtering is applied to enhance the videos, highlighting their motion characteristics and thereby improving the perceptual relevance of the extracted features. Experimental results on the LIVE and CSIQ video databases show that the scores predicted by the new model correlate highly with human perception and have low root mean square error. MVP clearly outperforms state-of-the-art blind VQA metrics, and in particular remains competitive even against top-performing full-reference VQA metrics.
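The paper's core idea of fusing three feature levels into a single quality descriptor can be sketched in code. The extractors below are hypothetical stand-ins (simple intensity statistics, frame differences, and inter-frame correlation), not the paper's actual NVS, global motion, or temporal correlation features; only the fusion structure mirrors the abstract.

```python
import numpy as np

def extract_nvs_features(video):
    # Placeholder for NVS features: mean/std of frame intensities.
    return np.array([video.mean(), video.std()])

def extract_global_motion_features(video):
    # Placeholder for global motion: mean absolute frame difference.
    diffs = np.abs(np.diff(video.astype(float), axis=0))
    return np.array([diffs.mean()])

def extract_temporal_correlation_features(video):
    # Placeholder for temporal correlation: mean correlation
    # between consecutive frames.
    flat = video.reshape(video.shape[0], -1).astype(float)
    corrs = [np.corrcoef(flat[t], flat[t + 1])[0, 1]
             for t in range(len(flat) - 1)]
    return np.array([np.mean(corrs)])

def fuse_features(video):
    # Fuse the three feature levels into one descriptor; a regressor
    # trained on subjective scores would then map it to a quality score.
    return np.concatenate([
        extract_nvs_features(video),
        extract_global_motion_features(video),
        extract_temporal_correlation_features(video),
    ])

# Toy video: 5 frames of 8x8 noise.
rng = np.random.default_rng(0)
video = rng.integers(0, 256, size=(5, 8, 8))
features = fuse_features(video)
print(features.shape)
```

In the full model, such a fused descriptor would be regressed against human opinion scores to produce the final blind quality prediction.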
