Abstract
In this work, we propose a novel method for generating 3D point clouds that leverages the properties of hypernetworks. Contrary to existing methods that learn only the representation of a 3D object, our approach simultaneously finds a representation of the object and its 3D surface. The main idea of our HyperCloud method is to build a hypernetwork that returns the weights of a particular neural network (the target network) trained to map points from a prior distribution into a 3D shape. As a consequence, a particular 3D shape can be generated by point-by-point sampling from the prior distribution and transforming the sampled points with the target network. Since the hypernetwork is based on an auto-encoder architecture trained to reconstruct realistic 3D shapes, the target network weights can be considered a parametrization of the surface of a 3D shape, rather than the standard point cloud representation usually returned by competing approaches. We also show that relying on hypernetworks to build 3D point cloud representations offers an elegant and flexible framework. To that end, we further extend our method by incorporating flow-based models, which results in a novel HyperFlow approach.
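To make the mechanism concrete, below is a minimal sketch, not the authors' reference implementation, of the hypernetwork/target-network idea described above: a hypernetwork maps a shape embedding to the flat weight vector of a small target MLP, and that MLP transforms points sampled from a simple prior into a point cloud. All names and layer sizes (HyperNetwork, target_forward, embed_dim, the hidden widths) are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch of the HyperCloud idea: hypernetwork -> target-network weights
# -> point-by-point sampling. Architecture details are illustrative assumptions.
import torch
import torch.nn as nn


class HyperNetwork(nn.Module):
    """Maps a shape embedding to a flat vector of target-network parameters."""

    def __init__(self, embed_dim=128, target_sizes=((3, 64), (64, 64), (64, 3))):
        super().__init__()
        self.target_sizes = target_sizes
        n_params = sum(i * o + o for i, o in target_sizes)  # weights + biases
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(), nn.Linear(256, n_params)
        )

    def forward(self, z):
        return self.net(z)


def target_forward(points, flat_weights, target_sizes):
    """Run the target MLP whose parameters are stored in `flat_weights`."""
    h, offset = points, 0
    for idx, (i, o) in enumerate(target_sizes):
        w = flat_weights[offset:offset + i * o].view(i, o); offset += i * o
        b = flat_weights[offset:offset + o]; offset += o
        h = h @ w + b
        if idx < len(target_sizes) - 1:
            h = torch.relu(h)
    return h


# Point-by-point generation: sample from the prior, push through the target net.
hyper = HyperNetwork()
z = torch.randn(128)                 # shape embedding (e.g. from the encoder)
weights = hyper(z)                   # weights of the target network for this shape
prior_points = torch.randn(2048, 3)  # samples from a simple 3D prior
cloud = target_forward(prior_points, weights, hyper.target_sizes)
print(cloud.shape)                   # torch.Size([2048, 3])
```

Because the target network is a continuous map, any number of points can be drawn this way, which is why its weights can be read as a surface parametrization rather than a fixed-size point set.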
Highlights
Today, many registration devices, such as LIDARs and depth cameras, are able to capture RGB channels and depth estimates.
We postulate that using hypernetworks to build powerful 3D point representations offers an elegant and flexible framework and, to that end, we introduce a more general method for creating such representations that encompasses the existing flow models.
We look at the cross-sections of the reconstructions to observe the main differences in how the input distribution is transformed into a final model by the target network.
Summary
Today, many registration devices, such as LIDARs and depth cameras, are able to capture RGB channels and depth estimates. We consider a point cloud as a sample from a distribution on object surfaces with additive noise introduced by a registration device, such as LIDAR. To model this distribution, we propose a new Spherical Log-Normal function, which mimics the topology of 3D objects and provides noncompact support. The resulting general framework, which we call HyperFlow, produces state-of-the-art generative results both for point clouds and mesh representations, while reducing the training time and corresponding memory footprint of the model by over an order of magnitude with respect to competing flow-based methods. The Spherical Log-Normal probability distribution enables the generalization of the HyperCloud framework to encompass flow-based models, on the basis of which we introduce the generalized HyperFlow method for building 3D point cloud representations.
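One plausible reading of a Spherical Log-Normal prior, sketched below as an assumption rather than the paper's exact parametrization, is a distribution whose directions are uniform on the unit sphere and whose radii are log-normal, so the density concentrates around a sphere-like surface while keeping noncompact support. The function name and the mu/sigma values are illustrative.

```python
# Hedged sketch of a Spherical Log-Normal prior: uniform direction on S^2
# times a log-normal radius. Parametrization details are assumptions.
import numpy as np


def sample_spherical_log_normal(n, mu=0.0, sigma=0.25, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Uniform directions on the unit sphere: normalize Gaussian samples.
    directions = rng.normal(size=(n, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    # Log-normal radii: strictly positive, density vanishes at the origin.
    radii = rng.lognormal(mean=mu, sigma=sigma, size=(n, 1))
    return directions * radii


points = sample_spherical_log_normal(2048)
print(points.shape)  # (2048, 3)
```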