Abstract

Point cloud reconstruction has made great progress with the application of deep learning, but blurred edges and the sparse distribution of point clouds remain major challenges in this field. In this paper, we propose a Cascaded Generative Network (CGNet) to reconstruct dense point clouds from a single image. To preserve shape features, a pre-reconstruction network is combined with an up-sampling network to form a multi-stage generation framework. During generation, an image re-description mechanism supervises the entire network by regenerating images from the reconstructed point clouds. Furthermore, the generative network introduces a siamese structure to extract consistent high-level semantics from multiple images. Extensive experiments on the ShapeNet dataset demonstrate that CGNet outperforms state-of-the-art point cloud reconstruction methods.
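The cascaded idea above (a coarse pre-reconstruction stage followed by an up-sampling stage that densifies the cloud) can be sketched as a toy pipeline. This is a minimal illustration with NumPy, not the paper's implementation: the encoder, layer shapes, feature dimension, coarse point count, and up-sampling ratio are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper), for illustration only.
FEAT, COARSE_N, RATIO = 128, 256, 4

def linear(x, w, b):
    return x @ w + b

# Stage 1: "pre-reconstruction" — map an image feature vector
# to a coarse point cloud of shape (COARSE_N, 3).
W1 = rng.normal(scale=0.01, size=(FEAT, COARSE_N * 3))
b1 = np.zeros(COARSE_N * 3)

def pre_reconstruct(img_feat):
    return linear(img_feat, W1, b1).reshape(COARSE_N, 3)

# Stage 2: "up-sampling" — expand each coarse point into RATIO points
# by predicting per-point offsets, yielding a denser cloud.
W2 = rng.normal(scale=0.01, size=(3, RATIO * 3))
b2 = np.zeros(RATIO * 3)

def upsample(coarse):
    offsets = linear(coarse, W2, b2).reshape(COARSE_N, RATIO, 3)
    return (coarse[:, None, :] + offsets).reshape(COARSE_N * RATIO, 3)

img_feat = rng.normal(size=FEAT)   # stand-in for an image encoder output
coarse = pre_reconstruct(img_feat)  # sparse cloud from stage 1
dense = upsample(coarse)            # densified cloud from stage 2
print(coarse.shape, dense.shape)    # (256, 3) (1024, 3)
```

The cascade keeps the stages separable: the coarse stage fixes the global shape, and the up-sampling stage only refines local density, which mirrors why a multi-stage framework can preserve shape features better than a single-shot generator.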
