Abstract

In most cases, paired datasets are difficult to obtain, which makes it hard to train a supervised point cloud upsampling network directly. Therefore, we propose a novel siamese network architecture for the unsupervised upsampling task, called UPU-SNet. This architecture contains two branches based on hierarchical spatial-aware transformers. Specifically, one branch includes a global-local transformer (GLT) module and a standard reconstruction module for coarse point generation. The other contains a shared GLT module, a transformer shuffle (TranShuffle) module and a shared reconstruction module for dense point generation. For multi-level feature extraction, the proposed GLT module explores both global-to-local and local-to-global feature fusion with the aid of the transformers' multi-head cross-attention mechanism. The TranShuffle module is placed directly after the GLT module to further refine and expand the extracted features. Moreover, taking sparse, coarse and dense points into account, we design a joint loss function that enables the network to generate denser and more uniform point clouds without ground truth. Extensive experiments show that our method not only outperforms existing unsupervised methods but also achieves competitive results against previous supervised and self-supervised methods. We also present surface reconstruction results on both synthetic and real point clouds, showing that our network enables accurate 3D reconstruction.
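
To make the bidirectional fusion idea concrete, the following is a minimal PyTorch-style sketch of cross-attention applied in both the global-to-local and local-to-global directions, as the GLT module is described. It is an illustrative assumption, not the paper's implementation: the class name `GLTFusion`, the feature dimensions, and the final linear merge are all hypothetical choices.

```python
# Minimal sketch of bidirectional global/local cross-attention fusion.
# Names (GLTFusion, global_feat, local_feat) are illustrative, not from the paper.
import torch
import torch.nn as nn


class GLTFusion(nn.Module):
    """Fuse per-point global and local features in both directions
    using multi-head cross-attention (hypothetical layout)."""

    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        # global features query the local features (global-to-local fusion)
        self.g2l = nn.MultiheadAttention(dim, heads, batch_first=True)
        # local features query the global features (local-to-global fusion)
        self.l2g = nn.MultiheadAttention(dim, heads, batch_first=True)
        # merge the two fusion directions back to a single feature map
        self.merge = nn.Linear(2 * dim, dim)

    def forward(self, global_feat: torch.Tensor, local_feat: torch.Tensor) -> torch.Tensor:
        # global_feat, local_feat: (B, N, dim) per-point feature maps
        g2l_out, _ = self.g2l(query=global_feat, key=local_feat, value=local_feat)
        l2g_out, _ = self.l2g(query=local_feat, key=global_feat, value=global_feat)
        return self.merge(torch.cat([g2l_out, l2g_out], dim=-1))


if __name__ == "__main__":
    # toy usage: one batch of 256 points with 128-d features
    fusion = GLTFusion(dim=128, heads=4)
    g = torch.randn(1, 256, 128)
    l = torch.randn(1, 256, 128)
    print(fusion(g, l).shape)  # torch.Size([1, 256, 128])
```

In this sketch the two attention directions are kept separate and concatenated, so each point feature carries both the global context it attended to and the local detail that attended back to it; the actual GLT module may combine the two fusions differently.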
