In video communication, video quality is determined mainly by bitrate, while the effects of video content and its visual perception on video quality assessment (VQA) are often overlooked. In fact, videos encoded at the same bitrate can still differ significantly in their VQA scores. Hence, we assume that bitrate, video content, and human visual characteristics together determine video quality. Based on these three aspects, this paper proposes a bitrate-based no-reference (NR) VQA metric that incorporates the visual perception of video content, namely BRVPVC. In this metric, an initial VQA model is first built from the bitrate alone. A visual perception model for video content is then designed based on the texture complexity and local contrast of frames, the temporal information of the video, and their visual perception features. Finally, the two models are combined with weight coefficients into the overall VQA metric, BRVPVC. Ten reference videos and 150 distorted videos from the LIVE video database were used to test the metric. Based on evaluations over the LIVE, VQEG, IRCCyN, EPFL-PoliMI, IVP, CSIQ, and Lisbon databases, the performance of BRVPVC is compared with that of six full-reference (FR) metrics and ten NR VQA metrics. The results show that our metric is more accurate than the six common FR VQA metrics and eight of the NR VQA metrics, and close in accuracy to the remaining two NR metrics; its Pearson linear correlation coefficient and Spearman rank-order correlation coefficient reach 0.8547 and 0.8260, respectively. In addition, the computational complexity of the proposed metric is lower than that of the video signal-to-noise ratio, video quality model, motion-based video integrity evaluation, spatiotemporal most apparent distortion, V-BLINDS, and V-CORNIA metrics, and it generalizes better than these metrics.
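The abstract describes fusing a bitrate-only quality model with a content-perception model through weight coefficients. The sketch below illustrates only that fusion structure; the functional forms, feature names, and the weight `w` are illustrative assumptions, not the paper's actual formulas.

```python
# Illustrative sketch of a weighted two-model VQA fusion.
# All functional forms and the weight w are assumptions for illustration;
# the paper's actual BRVPVC formulas are not reproduced here.
import math

def bitrate_quality(bitrate_kbps: float) -> float:
    """Toy bitrate-only model: quality grows with bitrate and saturates."""
    return math.log1p(bitrate_kbps) / math.log1p(10_000)

def perception_quality(texture: float, contrast: float, temporal: float) -> float:
    """Toy content-perception model from normalized features in [0, 1]."""
    return (texture + contrast + temporal) / 3.0

def brvpvc(bitrate_kbps: float, texture: float, contrast: float,
           temporal: float, w: float = 0.7) -> float:
    """Weighted combination of the two sub-models (w is an assumed coefficient)."""
    return (w * bitrate_quality(bitrate_kbps)
            + (1 - w) * perception_quality(texture, contrast, temporal))

score = brvpvc(2000, texture=0.6, contrast=0.5, temporal=0.4)
print(f"predicted quality: {score:.3f}")
```

The key design point the abstract implies is that two videos at the same bitrate receive different scores when their content features differ, which this weighted form captures.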