Abstract

We study the problem of mesh-based object generation. We propose a framework that generates mesh-based objects from point clouds in an end-to-end manner, using a combination of a variational autoencoder and a generative adversarial network. Instead of converting the point cloud into another representation, such as voxels, before feeding it into the network, our network directly consumes the point cloud and generates the corresponding 3D object. Given point clouds of objects, our network encodes the local and global geometric structures of the point clouds into latent representations. These latent vectors are then used to generate implicit surface representations of the objects corresponding to those point clouds. Here, the implicit surface representation is the Signed Distance Function (SDF), which preserves the inside-outside information of objects and from which polygon mesh surfaces can easily be reconstructed. This is particularly helpful in situations where 3D shapes are needed but only point clouds of the objects are available. Experiments demonstrate that our network, which makes use of both local and global geometric structure, can generate high-quality mesh-based objects from the corresponding point clouds. We also show that using a PointNet-like structure as the encoder helps achieve better results.
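As a brief illustration of the SDF representation mentioned in the abstract (a minimal sketch, not code from the paper): for a sphere of radius r centered at the origin, the signed distance of a query point is simply its distance to the center minus r, so the sign directly encodes the inside-outside information the paper relies on. The function name `sphere_sdf` and the sample points are illustrative assumptions.

```python
import numpy as np

def sphere_sdf(points, radius=1.0):
    """Signed distance to a sphere centered at the origin:
    negative inside, zero on the surface, positive outside."""
    return np.linalg.norm(points, axis=-1) - radius

# One point inside, one on the surface, one outside.
pts = np.array([[0.0, 0.0, 0.0],   # center  -> distance -radius
                [1.0, 0.0, 0.0],   # surface -> distance 0
                [2.0, 0.0, 0.0]])  # outside -> distance +1
d = sphere_sdf(pts)
print(d)  # [-1.  0.  1.]
```

In practice, a mesh is recovered from such an SDF by extracting its zero level set (e.g. with a marching-cubes algorithm), which is how the polygon surface reconstruction described above is typically realized.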
