Abstract

Metaverse applications often require many new 3D point cloud models that are unlabeled and have never been seen before; this limited information makes data-driven model analysis difficult. In this paper, we propose a novel data-driven 3D point cloud analysis network, GN-CNN, that is suitable for such scenarios. We tackle the difficulty with a few-shot learning (FSL) approach, proposing an unsupervised generative adversarial network, GN-GAN, to generate prior knowledge and perform warm-start pre-training for GN-CNN. Furthermore, 3D models in the Metaverse are mostly acquired with a focus on the models' visual appearance rather than exact point positions. Thus, conceptually, we also propose to augment the available information by extracting and incorporating local variance information, which conveys the appearance of a model. This is realized by introducing a graph-convolution-enhanced combined multilayer perceptron (CMLP) operation, termed GCMLP, to capture local geometric relationships, as well as a local normal-aware GeoConv, termed GNConv. The GN-GAN adopts an encoder–decoder structure, with GCMLP as the core operation of its encoder, and is trained on a reconstruction task. The GNConv serves as the convolution-like operation in GN-CNN. The classification performance of GN-CNN is evaluated on ModelNet10, where it achieves an overall accuracy of 95.9%. Its few-shot learning performance is evaluated on ModelNet40: when the training set is reduced to 30%, the overall classification accuracy reaches 91.8%, which is 2.5% higher than that of Geo-CNN. Experiments show that the proposed method improves accuracy in 3D point cloud classification tasks, including few-shot learning scenarios, compared with existing methods such as PointNet, PointNet++, DGCNN, and Geo-CNN, making it a beneficial method for Metaverse applications.
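The abstract does not give the exact definitions of GCMLP or GNConv, but both build on the standard pattern of local graph feature aggregation over a point cloud: for each point, gather its k nearest neighbors, embed each (center, neighbor-offset) pair with a shared weight matrix, and max-pool over the neighborhood. The following is a minimal NumPy sketch of that generic pattern only; the function names, toy linear layer, and parameter choices are illustrative assumptions, not the paper's actual operators.

```python
import numpy as np

def knn(points, k):
    # Pairwise squared distances between all points, (N, N).
    d = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)          # exclude each point as its own neighbor
    return np.argsort(d, axis=1)[:, :k]  # (N, k) neighbor indices

def local_graph_features(points, k=4, out_dim=8, rng=None):
    """Generic local graph aggregation (illustrative, not the paper's GCMLP):
    embed [x_i, x_j - x_i] for each neighbor j of point i with a shared
    linear layer + ReLU, then max-pool over the k neighbors."""
    rng = np.random.default_rng(0) if rng is None else rng
    idx = knn(points, k)
    neighbors = points[idx]                              # (N, k, 3)
    center = np.repeat(points[:, None, :], k, axis=1)    # (N, k, 3)
    edge = np.concatenate([center, neighbors - center], axis=-1)  # (N, k, 6)
    W = rng.standard_normal((6, out_dim))                # toy shared weights
    h = np.maximum(edge @ W, 0.0)                        # ReLU, (N, k, out_dim)
    return h.max(axis=1)                                 # per-point feature, (N, out_dim)
```

The max-pool makes the per-point feature invariant to the ordering of neighbors, which is why this pattern (popularized by DGCNN's EdgeConv) is a common core for point cloud encoders such as the one described here.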
