Abstract

As the short-video industry grows, quality assessment of user-generated videos has become a pressing problem. Existing no-reference video quality assessment methods are ill-suited to this scenario because they are designed for synthetically distorted videos. In this paper, we propose a novel deep blind quality assessment model for user-generated videos that accounts for content diversity and the temporal memory effect. Content-aware features are extracted from frames through a deep neural network, and a patch-based method is adopted to obtain per-frame quality scores. Moreover, we propose a temporal memory-based pooling model that exploits the temporal memory effect to predict overall video quality. Experimental results on the KoNViD-1k and LIVE-VQC databases demonstrate that our proposed method outperforms other state-of-the-art methods, and a comparative analysis confirms the effectiveness of our temporal pooling model.
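The abstract does not spell out the pooling formula, but the temporal memory effect is commonly modeled by letting recent low-quality frames drag down the perceived quality of the current frame. The sketch below is a hypothetical illustration of that idea, not the paper's actual model: the function name, window length `tau`, and blend weight `gamma` are all assumptions.

```python
import numpy as np

def temporal_memory_pooling(frame_scores, tau=12, gamma=0.5):
    """Pool per-frame quality scores into one video score.

    Illustrative memory-based pooling (assumed, not the paper's exact
    formulation): viewers remember recent low-quality moments, so each
    frame's effective quality blends its own score with the worst score
    in a short preceding window of `tau` frames.
    """
    q = np.asarray(frame_scores, dtype=float)
    pooled = np.empty_like(q)
    for t in range(len(q)):
        start = max(0, t - tau)
        memory = q[start:t + 1].min()           # worst recent quality
        pooled[t] = gamma * memory + (1 - gamma) * q[t]
    # simple average of the memory-adjusted frame scores
    return float(pooled.mean())
```

With this pooling, a brief quality drop lowers the video score for several subsequent frames, so a video with a dip scores lower than a uniformly good one even if their mean frame quality is similar.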
