The medial mesh is the most commonly used representation of the medial axis transform (MAT), a high-fidelity yet compact representation of 3D shape. Existing learning-based methods for extracting a medial mesh from point clouds either place strict requirements on the supervision of training data or are unable to learn the complete form of the medial mesh, leading to large reconstruction errors. We introduce Point2MM, an unsupervised method for learning a complete medial mesh from point clouds. Our key idea is to use the envelope geometry of medial primitives – spheres, cones, and slabs – to capture the intrinsic geometry of a shape, and the connectivity of the medial mesh to capture its topology. We first predict initial medial spheres by learning a geometric transformation of the point cloud, then construct an initial connectivity among the medial spheres by learning the probabilities of medial cones and medial slabs with a novel unsupervised formulation. Finally, we propose an iterative strategy for fine-tuning the medial primitives. Extensive evaluations and comparisons show that our method achieves superior accuracy and robustness in learning medial meshes from point clouds. A remaining limitation is the long training time, which we plan to address in future work.
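To make the key idea concrete, the sketch below samples points on the envelope of a medial cone – the tangent surface swept by linearly interpolating two medial spheres. It follows the standard envelope condition for a one-parameter family of spheres, not the paper's own implementation; the function name and sampling resolutions are hypothetical.

```python
import numpy as np

def sample_medial_cone_envelope(c1, r1, c2, r2, n_t=32, n_theta=64):
    """Sample points on the envelope surface of a medial cone.

    A medial cone is the union of spheres linearly interpolating between
    two medial spheres (c1, r1) and (c2, r2).  A point x lies on the
    envelope when x = c(t) + r(t) * n for a unit normal n satisfying
    n . u = -(r2 - r1) / L, where u is the unit axis direction and
    L = |c2 - c1|; this follows from the envelope condition dF/dt = 0
    of the sphere family F(x, t) = |x - c(t)|^2 - r(t)^2.
    """
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    axis = c2 - c1
    L = np.linalg.norm(axis)
    u = axis / L
    cos_phi = -(r2 - r1) / L  # fixed offset angle of each contact circle
    assert abs(cos_phi) <= 1.0, "one sphere contains the other: no envelope"
    sin_phi = np.sqrt(1.0 - cos_phi ** 2)

    # Build an orthonormal frame (u, e1, e2) around the cone axis.
    tmp = np.array([1.0, 0.0, 0.0]) if abs(u[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(u, tmp)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(u, e1)

    pts = []
    for t in np.linspace(0.0, 1.0, n_t):
        c_t = c1 + t * axis            # interpolated sphere center
        r_t = r1 + t * (r2 - r1)       # interpolated sphere radius
        for theta in np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False):
            n = cos_phi * u + sin_phi * (np.cos(theta) * e1 + np.sin(theta) * e2)
            pts.append(c_t + r_t * n)
    return np.array(pts)

# Example: the cone between a large and a small medial sphere.
envelope = sample_medial_cone_envelope([0, 0, 0], 0.5, [2, 0, 0], 0.2)
print(envelope.shape)  # (32 * 64, 3)
```

A medial slab (three connected spheres) admits an analogous closed-form envelope, and comparing such sampled envelope points against the input point cloud is one common way to measure the reconstruction error of a medial mesh.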