Abstract

With the increasing prevalence of multi-view and free-viewpoint video, view synthesis has gained extensive attention. In view synthesis, a new viewpoint is generated from color and depth images by a depth-image-based rendering (DIBR) algorithm, so the quality of the input color and depth images is crucial for generating a high-quality new view. However, DIBR is computationally expensive, so a method that infers the quality of a DIBR-synthesized image from only the input color and depth images is highly desirable. With this motivation, this paper presents a no-reference metric that predicts the quality of DIBR-synthesized images from statistics of fused color-depth images, without performing the actual DIBR process. Specifically, a wavelet-based fusion strategy is first proposed to simulate the interactions between color and depth distortions during the DIBR process. Multi-scale statistical features are then extracted from the fused color-depth image. Finally, a back-propagation (BP) regression network is employed to build the quality prediction model for DIBR-synthesized images. Experimental results demonstrate the superiority of the proposed metric over state-of-the-art methods.
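The abstract does not specify the fusion rule, so the following is only a minimal sketch of the general wavelet-based color-depth fusion idea: decompose both images with a single-level Haar wavelet transform, then combine the subbands. The combination rules used here (averaging the approximation coefficients, keeping the larger-magnitude detail coefficient) are common choices in wavelet image fusion and are assumptions, not necessarily the paper's exact strategy.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_dwt2(x):
    """Single-level 2D Haar wavelet transform of an even-sized grayscale image.

    Returns the four subbands (LL, LH, HL, HH), each half the size of x.
    """
    # Transform along rows (columns pairwise).
    lo = (x[:, ::2] + x[:, 1::2]) / SQRT2
    hi = (x[:, ::2] - x[:, 1::2]) / SQRT2
    # Transform along columns (rows pairwise).
    ll = (lo[::2] + lo[1::2]) / SQRT2
    lh = (lo[::2] - lo[1::2]) / SQRT2
    hl = (hi[::2] + hi[1::2]) / SQRT2
    hh = (hi[::2] - hi[1::2]) / SQRT2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2: reconstruct the image from its four subbands."""
    h, w = ll.shape
    lo = np.empty((2 * h, w))
    hi = np.empty((2 * h, w))
    lo[::2], lo[1::2] = (ll + lh) / SQRT2, (ll - lh) / SQRT2
    hi[::2], hi[1::2] = (hl + hh) / SQRT2, (hl - hh) / SQRT2
    x = np.empty((2 * h, 2 * w))
    x[:, ::2], x[:, 1::2] = (lo + hi) / SQRT2, (lo - hi) / SQRT2
    return x

def fuse_color_depth(color, depth):
    """Fuse a (grayscale) color image and a depth map in the wavelet domain.

    Assumed rules: average the approximation (LL) bands; for each detail band
    take the coefficient with the larger magnitude, so strong structure from
    either image survives into the fused result.
    """
    c = haar_dwt2(color)
    d = haar_dwt2(depth)
    fused = [(c[0] + d[0]) / 2.0]
    for cc, dd in zip(c[1:], d[1:]):
        fused.append(np.where(np.abs(cc) >= np.abs(dd), cc, dd))
    return haar_idwt2(*fused)
```

Statistical features (e.g., moments of the subband coefficients at several scales) would then be computed on the fused image and fed to the regressor; that stage is omitted here.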
