Abstract

This paper presents a deep CNN approach for point-based semantic scene labeling. The task is challenging because 3D point clouds have no canonical domain and can exhibit complex geometry and substantial variation in sampling density. We propose a novel framework in which the convolution operator is defined on depth maps around sampled points, capturing the characteristics of local surface regions. We introduce Depth Mapping (DM) and Reverse Depth Mapping (RDM) operators to transform between the point domain and the depth map domain. Our depth-map-based convolution is computationally efficient, robust to scene scale and sampling density, and captures rich surface characteristics. We further propose to augment each point with a feature encoding of local geometric patches obtained from multiple methods and aggregated through a patch pooling network (PPN). These patch features provide complementary information and are fed into our classification network to achieve semantic segmentation.
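For intuition only, the sketch below illustrates one way a Depth Mapping (DM)-style operator could turn a local point neighborhood into a regular depth map on which a standard 2D convolution can operate. The grid resolution, neighborhood radius, tangent-frame construction, and nearest-depth rasterization used here are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def depth_map(points, center, normal, radius=0.5, res=16):
    """Hypothetical sketch of a DM-style operator: project the points near
    `center` onto the tangent plane defined by `normal` and record their
    signed depths on a res x res grid. All parameters are assumptions."""
    # Build a local frame: two tangent axes orthogonal to the surface normal.
    normal = normal / np.linalg.norm(normal)
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:            # normal (anti)parallel to z-axis
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)

    # Neighbors within the sampling radius, expressed relative to the center.
    rel = points - center
    rel = rel[np.linalg.norm(rel, axis=1) < radius]

    # Tangent-plane coordinates and depth along the normal direction.
    x, y = rel @ u, rel @ v
    depth = rel @ normal

    # Rasterize: keep the closest depth falling into each grid cell.
    dm = np.full((res, res), np.nan)
    ix = np.clip(((x / radius + 1.0) * 0.5 * res).astype(int), 0, res - 1)
    iy = np.clip(((y / radius + 1.0) * 0.5 * res).astype(int), 0, res - 1)
    for i, j, d in zip(ix, iy, depth):
        if np.isnan(dm[i, j]) or d < dm[i, j]:
            dm[i, j] = d
    return np.nan_to_num(dm)                 # empty cells default to depth 0
```

Under these assumptions, the resulting depth maps can be batched and fed to an ordinary 2D CNN, while an inverse (RDM-style) mapping would scatter per-pixel predictions back to the contributing points.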
