Abstract
The “color-plus-depth” format represents a 3D scene using multiple color and depth images captured by an array of closely spaced cameras. Using this format, a novel image as observed from a horizontally shifted virtual viewpoint can be synthesized via depth-image-based rendering (DIBR), using neighboring camera-captured viewpoint images as reference. In this paper, using the same popular color-plus-depth representation, we propose to construct, in addition, novel images as observed from virtual viewpoints closer to the 3D scene, enabling a new dimension of view navigation. To construct this new image type, we first perform a new DIBR pixel mapping for $z$-dimensional camera movement. We then identify expansion holes—a new kind of missing pixel unique to $z$-dimensional DIBR-mapped images—using a depth layering procedure. To fill expansion holes, we formulate a patch-based maximum a posteriori problem, where the patches are appropriately spaced using diamond tiling. Leveraging recent advances in graph signal processing, we define a graph-signal smoothness prior to regularize the inverse problem. Finally, we design a fast iteratively reweighted least squares algorithm to solve the posed problem efficiently. Experimental results show that our $z$-dimensional synthesized images outperform images rendered by a naive modification of VSRS 3.5 by up to $4.01$ dB.
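As a rough illustration of the $z$-dimensional DIBR pixel mapping sketched above, the following is a minimal pinhole-model forward warp for a camera translated toward the scene along the optical axis. It is a simplified sketch with assumed function and parameter names, not the paper's implementation; in particular, note how pixels spread apart away from the principal point (scaling factor $Z/(Z-\Delta z)$), which is what creates expansion holes.

```python
import numpy as np

def dibr_z_warp(color, depth, dz):
    """Forward-warp a color+depth image for a camera moved dz toward the
    scene along the optical axis (pinhole model; illustrative sketch only).

    color : (H, W, 3) uint8 image
    depth : (H, W) metric depth map, all values > dz
    dz    : forward translation, in the same units as depth
    """
    H, W = depth.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0  # principal point (assumed centered)
    out = np.zeros_like(color)
    filled = np.zeros((H, W), dtype=bool)
    zbuf = np.full((H, W), np.inf)
    for v in range(H):
        for u in range(W):
            Z = depth[v, u]
            Z_new = Z - dz
            if Z_new <= 0:
                continue  # point would lie behind the moved camera
            # Under a pure z-translation, pixel offsets from the principal
            # point scale by Z / Z_new (the focal length cancels out).
            s = Z / Z_new
            un = int(round(cx + (u - cx) * s))
            vn = int(round(cy + (v - cy) * s))
            if 0 <= un < W and 0 <= vn < H and Z_new < zbuf[vn, un]:
                zbuf[vn, un] = Z_new  # z-buffering: keep the nearest surface
                out[vn, un] = color[v, u]
                filled[vn, un] = True
    # Unfilled target pixels include the "expansion holes" that arise
    # because neighboring source pixels map to non-adjacent targets.
    holes = ~filled
    return out, holes
```

For example, warping a constant-depth image with $dz > 0$ leaves a pattern of unfilled target pixels even though no disocclusion occurred, which is precisely the expansion-hole phenomenon the abstract describes.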