Abstract

Providing adequate Quality of Experience (QoE) to end users is crucial for streaming service providers. In this paper, a No-Reference (NR), bitstream-based, Human-Visual-System (HVS)-inspired video quality assessment (VQA) model is proposed to enable automatic quality assessment. Inspired by findings from the neuroscience community suggesting a considerable overlap between the brain areas active during video quality assessment and those active during saliency detection, the proposed method uses saliency maps to improve quality assessment accuracy. To this end, saliency maps are first generated from features extracted from the HEVC bitstream. Then, saliency-map statistics are employed to build a model of visual memory. Finally, a support vector regression pipeline learns an estimate of the video quality from the visual-memory, saliency, and frame features. Evaluations on the SJTU dataset indicate that the proposed bitstream-based no-reference VQA algorithm achieves competitive performance.
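The final stage of the pipeline described above can be sketched as a standard support vector regression fit. This is a minimal illustrative sketch, not the authors' implementation: the feature dimensionality, the synthetic training data, and the kernel/hyperparameter choices are all assumptions, standing in for the visual-memory, saliency, and frame features the paper extracts from the HEVC bitstream.

```python
# Sketch of the regression stage: an SVR maps per-video feature vectors
# (here, placeholders for visual-memory, saliency, and frame features)
# to a subjective quality score. All data below is synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical training set: 40 videos, each described by a 12-dim
# feature vector, with a subjective quality score (MOS) in [1, 5].
X_train = rng.normal(size=(40, 12))
y_train = 1 + 4 * rng.random(40)

# Standardize features, then fit an RBF-kernel SVR (hyperparameters
# are illustrative defaults, not values from the paper).
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
model.fit(X_train, y_train)

# Estimate quality for a new, unseen video's feature vector.
X_new = rng.normal(size=(1, 12))
predicted_mos = model.predict(X_new)
print(predicted_mos.shape)
```

In practice the feature vector would be assembled from the bitstream-derived saliency-map statistics and frame features described in the abstract, with hyperparameters chosen by cross-validation on the training split.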
