Abstract

A free-viewpoint image can be synthesized from the color and depth maps of reference viewpoints via depth-image-based rendering (DIBR), in which three-dimensional (3D) warping is generally used. A 3D-warped image contains disocclusion holes, whose missing pixels correspond to regions occluded in the reference images, and non-disocclusion holes caused by the limited sampling density of the reference images. The non-disocclusion holes appear among the scattered pixels of a single region or object, and both kinds of holes grow larger as the physical distance between the reference viewpoints and the free viewpoint increases. Filling these holes has a crucial impact on the quality of the free-viewpoint image. In this paper, we focus on free-viewpoint image synthesis that precisely fills the non-disocclusion holes caused by limited sampling density, using superpixel segmentation. In this approach, we propose two criteria for segmenting the depth and color data of each reference viewpoint. With these criteria, we can decide which neighboring pixels should be connected and which should be kept isolated in each reference image before it is warped. Polygons enclosed by the connected pixels, i.e., superpixels, are inpainted by k-means interpolation. Our superpixel approach is highly accurate because it uses both color and depth data to detect superpixels at the reference viewpoint. Therefore, once a reference image composed of superpixels is 3D-warped to a virtual viewpoint, the non-disocclusion holes are significantly reduced. Experimental results verify the advantage of our approach and demonstrate the high quality of the synthesized images even when the virtual viewpoint is physically far from the reference viewpoints.

Highlights

  • Watching in three dimensions (3D) with the freedom to select the desired viewpoint has become possible with free-viewpoint video [1,2,3,4,5] technologies

  • We evaluated the performance of free viewpoint image synthesis of the naïve 3D warping method [10], VSRS 4.0 [9], and our proposed method

  • The free-viewpoint images were synthesized with different baselines and Z values



Introduction

Watching in three dimensions (3D) with the freedom to select the desired viewpoint has become possible with free-viewpoint video [1,2,3,4,5] technologies. For free-viewpoint image synthesis, the multi-view-video-plus-depth (MVD) representation is commonly used. Given color-plus-depth data, synthesizing free-viewpoint images [6,7,8,9] is a straightforward approach: image pixels are back-projected into 3D space using per-pixel depth information and then rendered from the virtual viewpoint. This process is often referred to as 3D warping [10] or depth-image-based rendering (DIBR) [6]. When the virtual viewpoint is physically far from the reference viewpoints, naïve 3D warping cannot produce high-quality virtual-viewpoint images because of the holes it creates. A free-viewpoint image generated by naïve 3D warping has disocclusion holes and non-disocclusion holes.
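The back-projection and re-projection steps described above can be sketched as follows. This is a minimal illustration assuming a pinhole camera model with z-depth maps; the function name `warp_3d` and the matrices `K_ref`, `K_virt`, `R`, `t` are illustrative assumptions, not definitions from the paper, and the sketch omits occlusion handling (z-buffering) and the hole-filling that the paper addresses.

```python
import numpy as np

def warp_3d(depth, K_ref, K_virt, R, t):
    """Forward-warp reference pixel coordinates to a virtual view.

    depth  : (H, W) per-pixel z-depth map of the reference view
    K_ref  : (3, 3) intrinsic matrix of the reference camera
    K_virt : (3, 3) intrinsic matrix of the virtual camera
    R, t   : rotation (3, 3) and translation (3,) taking reference-frame
             points into the virtual camera frame
    Returns an (H, W, 2) array of projected (u, v) pixel coordinates.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates, shape 3 x N.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project each pixel to a 3D point in the reference camera frame:
    # a unit-z ray scaled by the per-pixel depth.
    pts = (np.linalg.inv(K_ref) @ pix) * depth.reshape(1, -1)
    # Transform into the virtual camera frame and re-project.
    pts_v = R @ pts + t.reshape(3, 1)
    proj = K_virt @ pts_v
    uv = (proj[:2] / proj[2]).T.reshape(H, W, 2)
    return uv
```

Because the projected coordinates are generally non-integer and spread apart as the baseline grows, rounding them onto the virtual image grid leaves exactly the scattered-pixel (non-disocclusion) holes that motivate the superpixel approach.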
