Abstract

Image-based modeling has an inherent problem: the complete geometry and appearance of a 3D object cannot be directly acquired from a limited set of 2D images, so reconstructing a 3D object from only sparse views is challenging due to occlusions and ambiguities. In this paper, we present a generative network architecture, which we call Multi-view GAN, that addresses single-image-based modeling by learning the multi-view manifold of 3D objects. Penalties for shape-identity consistency and view diversity are introduced to guide the learning process, and Multi-view GAN provides a powerful representation consisting of 3D descriptors for both shape and view. This disentangled, view-oriented representation allows us to explore the manifold of views, so a 3D object can be detailed without “blind spots” even when only a single view is available. We have evaluated our method on multi-view and 3D shape generation with a wide range of examples, and both qualitative and quantitative results demonstrate that Multi-view GAN significantly outperforms state-of-the-art methods.
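The abstract does not give the exact formulation of the two penalties, so the following is only a minimal sketch of how such terms are often written, assuming a generator `G(shape_code, view_code)` that renders an object image and a hypothetical shape encoder `E_shape(image)`; all names, shapes, and the specific loss forms here are illustrative assumptions, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

def identity_consistency_loss(E_shape, fake_views, shape_code):
    """Hypothetical penalty: the shape descriptor recovered from every rendered
    view of one object should match the shape code the generator was given.

    fake_views: (B, V, C, H, W) images of the same object from V viewpoints.
    shape_code: (B, D) shape descriptor used to condition the generator.
    """
    b, v = fake_views.shape[:2]
    flat = fake_views.flatten(0, 1)                    # (B*V, C, H, W)
    codes = E_shape(flat).view(b, v, -1)               # (B, V, D)
    target = shape_code.unsqueeze(1).expand_as(codes)  # same code for every view
    return F.mse_loss(codes, target)

def view_diversity_loss(fake_views, eps=1e-6):
    """Hypothetical penalty: images of the same object rendered under different
    view codes should not collapse onto each other; small pairwise distances
    between views are penalized.
    """
    b, v = fake_views.shape[:2]
    flat = fake_views.flatten(2)                       # (B, V, C*H*W)
    dists = torch.cdist(flat, flat)                    # (B, V, V) pairwise L2
    off_diag = dists + torch.eye(v, device=flat.device) * 1e6  # mask self-distances
    return (1.0 / (off_diag.min(dim=-1).values + eps)).mean()
```

In this reading, the consistency term ties all views of an object to a single shape descriptor, while the diversity term keeps the view code from being ignored; both would be added to the usual adversarial loss with their own weights.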
