Abstract
As one of the most important features for human perception, contours are widely used in graphics and mapping applications. However, extracting contours from large-scale point clouds is highly challenging due to the irregular distribution of the points. In this letter, we propose a 3-D-guided multiconditional residual generative adversarial network (3-D-GMRGAN), the first deep-learning framework to generate contours for large-scale outdoor point clouds. To enable the network to handle huge numbers of points, we represent contours in a parametric space rather than the raw point space and pair this representation with a parametric chamfer distance. Then, to gather contour features from candidate positions while avoiding an excessively large solution space, we propose a guided residual generative adversarial framework that uses a simple feature-based method to obtain an "over-extracted" distribution of potential contours. Experiments demonstrate that the proposed method generates contours efficiently for large-scale point clouds, with fewer outliers and pseudo contours than state-of-the-art approaches.
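The abstract refers to a parametric chamfer distance but does not give its formulation. As a point of reference, the sketch below shows the standard symmetric chamfer distance between two point sets; in the letter's setting the inputs would live in the parametric contour space rather than raw 3-D coordinates, and the exact parameterization used by 3-D-GMRGAN is an assumption not specified here.

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric chamfer distance between two point sets.

    a: (N, D) array, b: (M, D) array. D is the dimensionality of the
    representation; for the letter's parametric contours it would be the
    dimension of the parametric space (illustrative assumption only).
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    diff = a[:, None, :] - b[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    # Average distance from each point to its nearest neighbor in the other set,
    # accumulated in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```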