Abstract

Point clouds are a fundamental 3D representation that has drawn increasing attention due to the popularity of depth scanning devices. Synthesizing point clouds effectively and accurately is challenging because of their high-frequency (spatial) geometric details and the high dimensionality of the extrinsic observation space. In this article, we focus on simultaneously capturing the informative intrinsic structure of the latent low-dimensional space and the diversity of the ambient space. To this end, we propose a new framework consisting of dual alternating generator-discriminator pairs that produces diverse and realistic geometries. We evaluate our model on both generation and completion tasks across several public datasets (ModelNet40, ShapeNet, KITTI, etc.). Extensive experiments demonstrate the effectiveness of the framework for 3D point cloud synthesis. Within this unified framework, we not only achieve state-of-the-art performance compared with several well-known point cloud generative models, but also obtain competitive results on the completion task with only minor adjustments to the network structure. Moreover, the proposed method exhibits competitive performance on the 2D MNIST dataset.
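Since the abstract only outlines the framework, the following PyTorch snippet is a hypothetical sketch of what "dual alternating generator and discriminator pairs" could look like: one pair modeling the low-dimensional latent (intrinsic) codes and one pair operating on point clouds in the ambient space, updated in an alternating fashion. Every module name, dimension, loss choice, and the training schedule below is an assumption for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a dual alternating GAN scheme (not the paper's code):
# an intrinsic pair on latent codes and an ambient pair on point clouds,
# trained with alternating discriminator/generator updates.
import torch
import torch.nn as nn

NOISE_DIM, LATENT_DIM, N_POINTS = 32, 128, 2048  # assumed sizes

def mlp(sizes):
    """Plain MLP with ReLU between hidden layers."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

# Intrinsic pair: generate latent codes; discriminate against "real" codes
# (e.g. from a hypothetical pretrained point-cloud encoder).
g_latent = mlp([NOISE_DIM, 256, LATENT_DIM])
d_latent = mlp([LATENT_DIM, 256, 1])

# Ambient pair: decode latent codes into N_POINTS x 3 point clouds;
# discriminate against real point clouds.
g_points = mlp([LATENT_DIM, 512, N_POINTS * 3])
d_points = mlp([N_POINTS * 3, 512, 1])

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(
    list(g_latent.parameters()) + list(g_points.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(
    list(d_latent.parameters()) + list(d_points.parameters()), lr=1e-4)

def train_step(real_points, real_codes):
    """One alternating step: update both discriminators, then both generators."""
    batch = real_points.size(0)
    noise = torch.randn(batch, NOISE_DIM)
    fake_codes = g_latent(noise)
    fake_points = g_points(fake_codes)

    # Discriminator update (real -> 1, fake -> 0) for both pairs.
    opt_d.zero_grad()
    d_loss = (bce(d_latent(real_codes), torch.ones(batch, 1))
              + bce(d_latent(fake_codes.detach()), torch.zeros(batch, 1))
              + bce(d_points(real_points.view(batch, -1)), torch.ones(batch, 1))
              + bce(d_points(fake_points.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: both generators try to fool their discriminators.
    opt_g.zero_grad()
    g_loss = (bce(d_latent(fake_codes), torch.ones(batch, 1))
              + bce(d_points(fake_points), torch.ones(batch, 1)))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

The alternating schedule shown here (all discriminators, then all generators, each step) is only one plausible reading of "dual alternating pairs"; the paper may interleave the two pairs differently or couple them through additional losses.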
