Abstract

Advances in image-to-image translation techniques based on generative deep learning have shown promising results for the challenging task of inpainting-based 3D view synthesis. At the same time, even current 3D view synthesis methods often create distorted structures or blurry textures that are inconsistent with the surrounding areas. We analyzed recently proposed inpainting-based 3D view synthesis algorithms and observed that they no longer produce stretching artifacts or black holes. However, existing databases such as IETR, IRCCyN, and IVY contain synthesized views exhibiting these artifacts. This observation suggests that existing quality assessment algorithms for 3D view synthesis cannot reliably judge the quality of the most recent 3D synthesized views. Accordingly, using a test dataset, we analyze the need for a new large-scale database and a new perceptual quality metric oriented toward 3D synthesized views.
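For context, quality assessment of synthesized views is often benchmarked against conventional full-reference 2D metrics such as PSNR and SSIM computed between a synthesized view and a reference frame. The sketch below is a minimal illustration of such a baseline using scikit-image; the image pair is synthetic stand-in data, not drawn from the databases discussed above, and the function name is a hypothetical helper rather than any metric proposed in this work.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def assess_synthesized_view(reference, synthesized):
    """Score a synthesized view against its reference with conventional
    full-reference 2D metrics (PSNR and SSIM)."""
    psnr = peak_signal_noise_ratio(reference, synthesized, data_range=255)
    ssim = structural_similarity(reference, synthesized,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim

# Synthetic stand-in for a (reference, synthesized) view pair.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
syn = np.clip(ref.astype(int) + rng.integers(-10, 11, ref.shape),
              0, 255).astype(np.uint8)
print(assess_synthesized_view(ref, syn))
```

Such pixel-level baselines are not designed around DIBR-specific distortions (e.g., stretching or black holes), which is one motivation for quality metrics and databases tailored to 3D synthesized views.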
