Abstract

This paper proposes a method for rendering free-viewpoint images from omnidirectional videos using a deformable 3-D mesh model. In the proposed method, a 3-D mesh is placed in front of a virtual viewpoint and deformed using pre-estimated omnidirectional depth maps that are selected on the basis of the position and posture of the virtual viewpoint. Our approach is fundamentally a model-based rendering approach, which renders a geometrically correct virtualized world; however, to avoid the hole problem, we newly employ a viewpoint-dependent deformable 3-D model instead of the single unified 3-D model that is generally used in model-based rendering. In experiments, free-viewpoint images are generated from omnidirectional video captured by an omnidirectional multi-camera system to show the feasibility of the proposed method for walk-through applications in the virtualized environment.
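
As a rough, hypothetical illustration of this pipeline (not the paper's implementation), the Python sketch below shows the two central operations in simplified form: choosing a pre-estimated omnidirectional depth map from the position and posture of the virtual viewpoint, and deforming a mesh placed in front of that viewpoint by pushing each vertex to the sampled depth along its viewing ray. The selection criterion, the weight w, and the depth_lookup interface are all assumptions made for illustration.

    import numpy as np

    def select_depth_map(virtual_pos, virtual_dir, capture_positions, capture_dirs, w=0.1):
        # Positional distance between the virtual viewpoint and each capture point.
        d_pos = np.linalg.norm(capture_positions - virtual_pos, axis=1)
        # Angular mismatch between viewing directions (unit vectors assumed).
        d_dir = 1.0 - capture_dirs @ virtual_dir
        # The blend weight w is an assumption; the paper's actual position-and-posture
        # criterion is not reproduced in this summary.
        return int(np.argmin(d_pos + w * d_dir))

    def deform_mesh(vertices, virtual_pos, depth_lookup):
        # Cast a ray from the virtual viewpoint through each mesh vertex...
        rays = vertices - virtual_pos
        rays /= np.linalg.norm(rays, axis=1, keepdims=True)
        # ...and push the vertex out to the depth sampled from the selected
        # omnidirectional depth map along that ray (depth_lookup is hypothetical).
        depths = np.array([depth_lookup(r) for r in rays])
        return virtual_pos + rays * depths[:, None]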

Highlights

  • A typical goal of representing and modeling a large-scale 3-D environment is to create a high-quality virtualized world based on the real environment

  • The contributions of this paper are summarized as follows: (1) the geometry of the scene for the virtual viewpoint can be immediately recovered by fitting a deformable mesh model to pre-estimated omnidirectional depth maps for the original viewpoints, (2) the 3-D mesh model is deformed to an optimal shape for the scene structure so that no holes appear in the generated images, and (3) omnidirectional free-viewpoint rendering is achieved by using omnidirectional video sequences as input

  • We have proposed an omnidirectional free-viewpoint rendering method that uses a view-dependent 3-D mesh model

Summary

INTRODUCTION

One of the typical goals of representing and modeling a large-scale 3-D environment is to create a high-quality virtualized world based on the real environment. The simplest method for novel view synthesis in the image-based rendering (IBR) approach is the morphing-based method, which directly warps images using corresponding points in a pair of images [10, 11]. With this method, we can generate realistic images for a virtual camera placed between the original camera positions; however, the virtual viewpoint is limited to such in-between positions. The contributions of this paper are summarized as follows: (1) the geometry of the scene for the virtual viewpoint can be immediately recovered by fitting a deformable mesh model to pre-estimated omnidirectional depth maps for the original viewpoints, (2) the 3-D mesh model is deformed to an optimal shape for the scene structure so that no holes appear in the generated images, and (3) omnidirectional free-viewpoint rendering is achieved by using omnidirectional video sequences as input.
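
To make the morphing-based baseline concrete, here is a deliberately minimal Python sketch, under assumed inputs, of synthesizing an in-between view from a pair of images with known corresponding points. A real morph would also warp the pixels toward the interpolated points before blending; that step is omitted to keep the sketch short.

    import numpy as np

    def morph_view(img_a, img_b, pts_a, pts_b, t):
        # Interpolate corresponding point positions:
        # t = 0 reproduces view A, t = 1 reproduces view B.
        pts_t = (1.0 - t) * pts_a + t * pts_b
        # Cross-dissolve the (float) images; a full morph would first warp
        # each image toward pts_t before blending.
        img_t = (1.0 - t) * img_a + t * img_b
        return img_t, pts_t

Calling morph_view(img_a, img_b, pts_a, pts_b, 0.5), for instance, yields the halfway view; as noted above, this only applies to virtual cameras placed between the original camera positions, which is the limitation the proposed method addresses.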

FREE-VIEWPOINT RENDERING USING VIEW-DEPENDENT 3-D MESH MODEL
Definition of energy function
Selection of depth map
Initialization of mesh model
Deformation of mesh model
View-dependent texture mapping
Acquisition of input data
Free-viewpoint rendering for straight routes
Computational cost
CONCLUSION
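
The outline above lists the steps of the proposed method, including the definition of an energy function that drives the mesh deformation. Since this summary does not reproduce the paper's actual energy, the following Python sketch only illustrates the general idea under assumed terms: a squared data term pulling each per-vertex depth toward the selected depth map, plus a Laplacian-style smoothness term over mesh neighbors, minimized by a simple iterative update. The energy form, weights, and update rule are all assumptions, not the paper's definitions.

    import numpy as np

    def deform_depths(depths, observed, neighbors, lam=0.5, step=0.2, iters=200):
        # Approximate gradient descent on a per-vertex energy
        #   E = sum_i (d_i - observed_i)^2 + lam * sum_i (d_i - mean_{j in N(i)} d_j)^2,
        # i.e. a data term against the selected depth map plus a smoothness
        # term over each vertex's mesh neighbors (an assumed, toy energy).
        d = depths.astype(float).copy()
        for _ in range(iters):
            # Neighbor means are held fixed within each step (Jacobi-style),
            # so the cross terms of the exact gradient are dropped.
            nbr_mean = np.array([d[n].mean() for n in neighbors])
            grad = 2.0 * (d - observed) + 2.0 * lam * (d - nbr_mean)
            d -= step * grad
        return d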