Abstract

Point cloud generation aims to transform a latent code into realistic 3D shapes through generative models. However, most progressive generative methods ignore the spatial relationship among different stages and suffer from the loss of contextual information. To address this issue, we propose the dual-stream progressive refinement adversarial network (DPR-GAN), which utilizes a dual-stream structure to establish the relationship between adjacent stages. This mechanism learns the spatial context and preserves more spatial detail of the point clouds at each stage. In addition, DPR-GAN adopts a 3D gridding transformation to guide the shape deformation. In this way, the transformation learns a reasonable correspondence between latent codes and the local regions of 3D shapes. Benefiting from the uniformity and adaptability of the 3D grids, our proposed DPR-GAN improves the quality and consistency of the generated point clouds. We conduct comprehensive experiments demonstrating that DPR-GAN generates pluralistic point clouds, comparing it with state-of-the-art generation methods under both visual and quantitative evaluations.
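
To make the two ideas in the abstract concrete, the sketch below illustrates one plausible reading of a dual-stream progressive generator: a coordinate stream and a feature (context) stream are passed jointly between adjacent refinement stages, and generation starts from uniform 3D grid seeds that guide the deformation. This is a minimal sketch assuming a PyTorch implementation; the module names (`RefineStage`, `DualStreamGenerator`), dimensions, and the uniform-grid seeding are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of a dual-stream progressive refinement generator.
# All names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn

def make_grid(resolution):
    """Uniform 3D grid seeds in [-1, 1]^3 that guide the deformation."""
    axis = torch.linspace(-1.0, 1.0, resolution)
    x, y, z = torch.meshgrid(axis, axis, axis, indexing="ij")
    return torch.stack([x, y, z], dim=-1).reshape(-1, 3)  # (r^3, 3)

class RefineStage(nn.Module):
    """One progressive stage: refines points and features jointly, so
    spatial context flows between adjacent stages instead of being lost."""
    def __init__(self, latent_dim, feat_dim):
        super().__init__()
        in_dim = 3 + feat_dim + latent_dim
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 3))                       # coordinate offsets
        self.feat_mlp = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU())  # context update

    def forward(self, points, feats, z):
        # Broadcast the latent code to every point, then fuse both streams.
        z_exp = z.unsqueeze(1).expand(-1, points.size(1), -1)
        h = torch.cat([points, feats, z_exp], dim=-1)
        points = points + self.point_mlp(h)          # coordinate stream
        feats = feats + self.feat_mlp(h)             # context stream
        return points, feats

class DualStreamGenerator(nn.Module):
    def __init__(self, latent_dim=128, feat_dim=64, grid_res=8, n_stages=3):
        super().__init__()
        self.register_buffer("grid", make_grid(grid_res))  # grid seeds
        self.feat_init = nn.Linear(latent_dim, feat_dim)
        self.stages = nn.ModuleList(
            RefineStage(latent_dim, feat_dim) for _ in range(n_stages))

    def forward(self, z):
        b = z.size(0)
        points = self.grid.unsqueeze(0).expand(b, -1, -1)  # grid-guided start
        feats = self.feat_init(z).unsqueeze(1).expand(-1, points.size(1), -1)
        for stage in self.stages:           # adjacent stages share both
            points, feats = stage(points, feats, z)
        return points                       # (b, grid_res^3, 3) point cloud

# Usage: sample a latent code and decode it into a point cloud.
gen = DualStreamGenerator()
cloud = gen(torch.randn(4, 128))            # -> shape (4, 512, 3)
```

Carrying the feature stream alongside the coordinates is what distinguishes this from a purely coordinate-based progressive pipeline: each stage sees the context accumulated by the previous one, which is the abstract's stated remedy for lost contextual information.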
