Abstract

Learning new representations of 3D point clouds is an active research area in 3D vision, as the unordered structure of point clouds still presents challenges for neural network architecture design. Recent work has explored learning global, local, or multi-scale features for point clouds. However, none of the earlier methods focused on capturing contextual shape information by analyzing local orientation distributions of points. In this paper, we use point orientation distributions around a point to obtain an expressive local neighborhood representation for point clouds. We achieve this by dividing the spherical neighborhood of a given point into predefined cone volumes and using the point statistics inside each volume as point features. In this way, a local patch is represented not only by the selected point’s nearest neighbors, but also by a point density distribution defined along multiple orientations around the point. We then construct an orientation distribution function (ODF) neural network built on an ODFBlock, which relies on MLP (multi-layer perceptron) layers. The new ODFNet model achieves state-of-the-art accuracy for object classification on the ModelNet40 and ScanObjectNN datasets, and for segmentation on the ShapeNet and S3DIS datasets.
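To make the cone-volume idea concrete, the following is a minimal NumPy sketch of one plausible realization: neighbors within a spherical radius of a query point are assigned to the predefined cone whose axis best aligns with their direction, and a normalized per-cone point count serves as the local descriptor. The function name, the choice of six axis-aligned cones, and the use of a simple count statistic are illustrative assumptions; the paper's actual ODF features and ODFBlock layers are richer than this sketch.

```python
import numpy as np

def cone_orientation_histogram(points, center, cone_axes, radius=1.0):
    """Illustrative sketch (not the paper's implementation): assign each
    neighbor within `radius` of `center` to the cone whose axis is most
    aligned with the neighbor's direction, then return the normalized
    per-cone point counts as a simple orientation-distribution feature.

    points:     (N, 3) point cloud
    center:     (3,) query point
    cone_axes:  (C, 3) unit vectors defining cone directions
    """
    offsets = points - center                      # vectors from center to points
    dists = np.linalg.norm(offsets, axis=1)
    mask = (dists > 1e-8) & (dists <= radius)      # points inside the sphere, excluding the center itself
    dirs = offsets[mask] / dists[mask, None]       # unit directions to neighbors
    cos_sim = dirs @ cone_axes.T                   # (M, C) cosine similarity to each cone axis
    assignment = cos_sim.argmax(axis=1)            # best-aligned cone per neighbor
    hist = np.bincount(assignment, minlength=len(cone_axes)).astype(float)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

For example, with six axis-aligned cone directions, two neighbors along +x and one along +y inside the radius yield the histogram [2/3, 0, 1/3, 0, 0, 0]; in the full model such per-point descriptors would be fed to learned MLP layers rather than used directly.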
