Abstract

Point clouds, as the native output of many real-world 3D sensors, cannot be consumed by convolutional networks as trivially as 2D images, mainly because of the irregular organization of the points. In this paper, we propose a new convolution operation, named Feature Interpolation Convolution (FI-Conv), which is computationally efficient, invariant to the order of points, and robust to different samplings and varying densities. First, a point cloud is viewed as a discrete sample of a continuous space, and the feature attached to each point as a sample of a continuous feature function. We seek a set of points, named key points, that describe the important locations of the convolution and relatively stable points of the feature function, such as extrema or inflection points. In our method, the positions of the key points are trainable parameters of the network, i.e., we can optimize them during training. We then interpolate the point features onto the learned key points. Finally, a standard convolution is applied to these estimated features. We use FI-Conv to replace the convolution operations of several cutting-edge networks. Experiments show that FI-Conv effectively improves the performance of these networks and achieves on-par or better results than state-of-the-art methods on multiple challenging benchmark datasets and tasks.
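The interpolate-then-convolve pipeline described in the abstract can be sketched as below. This is a minimal illustration, not the paper's exact formulation: the inverse-distance interpolation scheme, the function names, and the per-key-point kernel shape are all assumptions made for the sketch; the actual method learns the key-point positions jointly with the kernel.

```python
import numpy as np

def interpolate_to_key_points(points, feats, key_points, eps=1e-8):
    """Interpolate point features onto key points (illustrative scheme).

    points:     (N, 3) input point coordinates
    feats:      (N, C) per-point features
    key_points: (K, 3) key-point coordinates (trainable in the paper)
    returns:    (K, C) estimated features at the key points

    Here we use inverse-distance weighting as one possible interpolator;
    the weights are normalized so each key point's weights sum to 1.
    """
    d = np.linalg.norm(key_points[:, None, :] - points[None, :, :], axis=-1)  # (K, N)
    w = 1.0 / (d + eps)
    w /= w.sum(axis=1, keepdims=True)
    return w @ feats

def fi_conv(points, feats, key_points, kernel):
    """Apply a standard convolution over the key-point layout.

    kernel: (K, C_in, C_out), one weight matrix per key point; summing over
    key points mimics a discrete convolution on the fixed key-point grid.
    """
    key_feats = interpolate_to_key_points(points, feats, key_points)  # (K, C_in)
    return np.einsum('kc,kco->o', key_feats, kernel)                  # (C_out,)
```

Because the interpolation sums symmetrically over all input points, the output is unchanged under any permutation of the input order, which matches the order-invariance property claimed for FI-Conv.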
