Abstract

Reconstructing a complete 3D mesh model from a single natural image remains a challenging problem. Most existing methods represent 3D shapes as voxels or point clouds, which are not always trivial to convert into high-quality meshes. In this paper, we present a novel method that effectively addresses this problem by using a specially designed GAN model to map a given natural image to a geometry image, from which the corresponding 3D mesh can be reconstructed. Specifically, we disentangle the tasks of viewpoint estimation and 3D reconstruction, so that the reconstruction network can focus on generating vivid 3D meshes while accurate viewpoint information is preserved. We also add a differentiable module that renders silhouettes of the synthesized geometry image from multiple viewpoints, improving the consistency between the generated 3D model and the input 2D image. Furthermore, we design a compact but effective discriminator for geometry images to encourage a plausible overall contour of the generated object. Experiments on a publicly available database demonstrate that the proposed method generates high-fidelity 3D meshes and outperforms other state-of-the-art approaches both qualitatively and quantitatively. Our code is publicly available at https://github.com/tasx0823/EasyMesh.
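
As a rough illustration of the geometry-image representation mentioned above (not code from the paper), the sketch below shows how an H x W x 3 geometry image, where each pixel stores an (x, y, z) vertex on a regular grid, can be stitched into a triangle mesh. The function name geometry_image_to_mesh and the toy 4x4 example are hypothetical, and the sketch assumes a simple regular-grid parameterization.

import numpy as np

def geometry_image_to_mesh(geo_img: np.ndarray):
    """geo_img: (H, W, 3) array of 3D coordinates sampled on a regular grid.

    Returns (vertices, faces), with one vertex per pixel and two
    triangles per grid cell. Hypothetical sketch, not the authors' code.
    """
    h, w, _ = geo_img.shape
    vertices = geo_img.reshape(-1, 3)        # one vertex per pixel
    idx = np.arange(h * w).reshape(h, w)     # pixel -> vertex index
    # Corners of each grid cell: top-left, top-right, bottom-left, bottom-right.
    tl, tr = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    bl, br = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    # Split each cell into two triangles: (tl, bl, tr) and (tr, bl, br).
    faces = np.concatenate(
        [np.stack([tl, bl, tr], axis=1), np.stack([tr, bl, br], axis=1)], axis=0)
    return vertices, faces

# Usage: a flat 4x4 geometry image yields a 16-vertex, 18-triangle mesh.
xx, yy = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
geo = np.stack([xx, yy, np.zeros_like(xx)], axis=-1)
verts, faces = geometry_image_to_mesh(geo)
print(verts.shape, faces.shape)  # (16, 3) (18, 3)

Because the connectivity is fixed by the grid, a network only has to regress the pixel values of the geometry image; the mesh topology comes for free, which is what makes this representation convenient compared with voxels or point clouds.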
