Abstract

In a 3-D video system, automatically predicting the quality of a synthesized 3-D video from its input color and depth videos is an urgent but very difficult task, whereas existing full-reference methods measure the perceptual quality of the synthesized video itself. In this paper, a high-efficiency view synthesis quality prediction (HEVSQP) metric for view synthesis is proposed. Based on the derived VSQP model, which quantifies the influences of color distortions, depth distortions, and their interactions on the perceptual quality of synthesized 3-D video, color-involved and depth-involved VSQP indices are predicted separately and then combined to yield the HEVSQP index. Experimental results on our constructed NBU-3D Synthesized Video Quality Database demonstrate that the proposed HEVSQP performs well on the entire database compared with other full-reference and no-reference video-quality assessment metrics.
