Abstract
Depth image-based rendering (DIBR) plays an important role in 3D video and free viewpoint video synthesis. However, artifacts might occur in the synthesized view due to viewpoint changes and stereo depth estimation errors. Holes typically arise in out-of-field regions and disocclusions, and filling them appropriately is a challenge. In this paper, a virtual view synthesis approach based on asymmetric bidirectional DIBR is proposed. A depth image preprocessing method is applied to detect and correct unreliable depth values around foreground edges. For the primary view, all pixels are warped to the virtual view by the modified DIBR method. For the auxiliary view, only selected regions are warped, namely those containing content that is not visible in the primary view. This reduces the computational cost and prevents irrelevant foreground pixels from being warped into the holes. During the merging process, a color correction approach is introduced to make the result appear more natural. In addition, a depth-guided inpainting method is proposed to handle the remaining holes in the merged image. Experimental results show that, compared with bidirectional DIBR, the proposed rendering method reduces rendering time by about 37% and achieves 97% hole reduction. In terms of visual quality and objective evaluation, our approach outperforms previous methods.
Highlights
In recent years, three-dimensional (3D) video has become one of the most popular multimedia types
As the purpose of bidirectional depth image-based rendering (DIBR) is to obtain the missing texture of the holes left by single-view rendering, we reduce the computational cost by warping only the regions that are useful for hole filling
In order to evaluate the efficiency of the proposed asymmetric bidirectional rendering approach, we use single-view rendering (SVR), bidirectional rendering (BR), and asymmetric bidirectional rendering (ABR) to synthesize the target view
Summary
Three-dimensional (3D) video has become one of the most popular multimedia types. Pandey et al. [7] propose a virtual view synthesis method using a single Red-Green-Blue-Depth (RGBD) camera. It can generate novel renderings of the performer based on past observations from multiple viewpoints and the current RGBD image from a fixed view. Depth image-based rendering (DIBR) is a reliable technique for view synthesis in these cases [9].
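To make the DIBR warping step concrete, the following is a minimal sketch of forward warping for rectified cameras, where each pixel is shifted by a disparity d = f·b/Z derived from its depth, and a z-buffer resolves collisions so that nearer (foreground) pixels win. Unfilled target pixels are the holes the paper's merging and inpainting stages must handle. This is an illustrative simplification, not the paper's implementation; the function name and parameters are assumptions.

```python
import numpy as np

def dibr_forward_warp(color, depth, focal, baseline):
    """Forward-warp a rectified color image to a horizontally shifted
    virtual view using per-pixel depth (simplified DIBR sketch).

    color:    (H, W) intensity image (grayscale for brevity)
    depth:    (H, W) per-pixel depth Z > 0
    focal:    focal length in pixels (assumed)
    baseline: camera baseline to the virtual view (assumed)
    Returns the warped image and a boolean hole mask.
    """
    h, w = depth.shape
    # Horizontal disparity for rectified cameras: d = f * b / Z
    disparity = np.round(focal * baseline / depth).astype(int)
    virtual = np.zeros_like(color)
    warped_depth = np.full((h, w), np.inf)   # z-buffer
    hole_mask = np.ones((h, w), dtype=bool)  # True = no pixel mapped here
    for y in range(h):
        for x in range(w):
            xv = x - disparity[y, x]
            # Keep the nearest source pixel when several land on the
            # same target column (foreground occludes background).
            if 0 <= xv < w and depth[y, x] < warped_depth[y, xv]:
                warped_depth[y, xv] = depth[y, x]
                virtual[y, xv] = color[y, x]
                hole_mask[y, xv] = False
    return virtual, hole_mask
```

With constant depth the whole image shifts uniformly and holes appear at the trailing border (an out-of-field region); with depth discontinuities, holes also open up behind foreground objects (disocclusions).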