Abstract

This paper presents a new hybrid Kinect-variety-based synthesis scheme that renders artifact-free multiple views for autostereoscopic/automultiscopic displays. The proposed approach does not explicitly require dense scene depth information for synthesizing novel views from arbitrary viewpoints. Instead, the integrated framework first constructs a consistent minimal image-space parameterization of the underlying 3D scene. This compact representation of scene structure is formed using only implicit sparse depth information for a few reference scene points extracted from raw RGB-D data. Views from arbitrary positions can then be inferred by moving the virtual camera within the parameterized space, enforcing Euclidean constraints on the reference scene images under a full-perspective projection model. Unlike state-of-the-art depth image-based rendering (DIBR) methods, in which the accuracy of the input depth map is crucial for high-quality output, the proposed algorithm does not depend on precise per-pixel geometry. It therefore sidesteps the need to recover and refine incomplete or noisy depth estimates with advanced hole-filling or upscaling techniques. The approach performs well in unconstrained indoor/outdoor environments, where the performance of range sensors or dense depth-based algorithms can be seriously degraded by complex scene geometry. We demonstrate that the proposed hybrid scheme provides guarantees on completeness and on optimality with respect to inter-view consistency. In the experimental validation, we performed a quantitative evaluation as well as a subjective assessment on scenes with complex geometric or surface properties. A comparison with recent representative DIBR methods is additionally performed to demonstrate the superior performance of the proposed scheme.
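To make the geometric idea behind the abstract concrete, the following is a minimal, self-contained sketch (not the authors' implementation): a few sparse reference points are back-projected using their Kinect-style depth values and then re-projected into a laterally shifted virtual camera under a full-perspective pinhole model. The intrinsics, baseline, and sample points are illustrative assumptions rather than values taken from the paper.

```python
# Sketch only: sparse-point back-projection and re-projection into a novel view.
# Intrinsics, baseline, and reference samples below are assumed for illustration.
import numpy as np

K = np.array([[525.0,   0.0, 319.5],   # assumed Kinect-like intrinsics (fx, fy, cx, cy)
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

def backproject(pixels, depths, K):
    """Lift sparse pixel samples (u, v) with metric depth z to 3D camera coordinates."""
    uv1 = np.column_stack([pixels, np.ones(len(pixels))])   # homogeneous pixel coordinates
    rays = (np.linalg.inv(K) @ uv1.T).T                     # normalized viewing rays
    return rays * depths[:, None]                           # scale rays by depth -> (X, Y, Z)

def project_to_novel_view(points_3d, K, baseline_x):
    """Re-project 3D reference points into a virtual camera translated along the x-axis."""
    shifted = points_3d - np.array([baseline_x, 0.0, 0.0])  # rigid (Euclidean) camera shift
    proj = (K @ shifted.T).T
    return proj[:, :2] / proj[:, 2:3]                       # perspective division

# A handful of sparse reference points (pixel coordinates + depth in meters), purely illustrative.
ref_pixels = np.array([[160.0, 120.0], [480.0, 120.0], [320.0, 360.0]])
ref_depths = np.array([1.2, 2.5, 1.8])

points = backproject(ref_pixels, ref_depths, K)
novel_pixels = project_to_novel_view(points, K, baseline_x=0.065)  # ~65 mm inter-view baseline
print(novel_pixels)
```

In this toy setting the novel-view positions of the sparse points follow directly from the rigid camera shift; the paper's contribution lies in how such sparse constraints parameterize the full image space so that complete views can be rendered without dense per-pixel depth.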
