Abstract

Live broadcasting video has become increasingly popular, and high-quality live broadcasting is in strong demand. In practice, live broadcasting videos usually undergo several processing stages, which inevitably introduce multiple distortions, e.g., frame freezing and intensity mutation, degrading the quality of experience. However, little work has addressed the quality evaluation of live broadcasting videos, which may hinder the development of more advanced live broadcasting video delivery systems. Motivated by this, this study presents a no-reference quality evaluation model for live broadcasting videos (LBVQA) that operates in both the temporal and spatial domains. In the temporal domain, statistical features are extracted to measure frame freezing and intensity mutation, and an entropy-based feature is extracted to describe global jitter. In the spatial domain, blurring is measured based on phase coherence, and the abnormal-exposure ratio is computed with an adaptive threshold. Finally, all features are fed into a backpropagation neural network to train the quality prediction model. Experimental results on the Live Broadcasting Video Database demonstrate the advantages of the proposed metric over state-of-the-art image and video quality metrics.
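The temporal distortions named above can be illustrated with a minimal sketch. The following is not the paper's method, only a plausible baseline, assuming grayscale frames in [0, 255]: frame freezing is flagged when consecutive frames are nearly identical (mean absolute frame difference below a small threshold), and intensity mutation when the mean brightness jumps abruptly. The threshold values `freeze_thresh` and `mutation_thresh` are illustrative placeholders, not values from the paper.

```python
import numpy as np

def temporal_flags(frames, freeze_thresh=0.5, mutation_thresh=20.0):
    """Flag frame freezing and intensity mutation in a grayscale video.

    frames: array of shape (T, H, W), pixel values in [0, 255].
    Returns two boolean arrays of length T-1:
      frozen[t]  - mean absolute difference between frames t and t+1
                   is below freeze_thresh (near-identical frames)
      mutated[t] - mean intensity jumps by more than mutation_thresh
                   between frames t and t+1 (abrupt brightness change)
    Thresholds are illustrative, not taken from the paper.
    """
    frames = np.asarray(frames, dtype=np.float64)
    # Per-pair mean absolute pixel difference: low values indicate freezing.
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    # Per-frame mean intensity: large jumps indicate intensity mutation.
    means = frames.mean(axis=(1, 2))
    frozen = diffs < freeze_thresh
    mutated = np.abs(np.diff(means)) > mutation_thresh
    return frozen, mutated
```

Statistics over these flags (e.g., freeze-run lengths, mutation counts) would then serve as temporal features for a quality model.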

