Abstract

Although Convolutional Neural Networks (CNNs) have achieved great success on image data, the attributes of point cloud data, such as its irregular format and sparse 3D distribution, prevent CNNs from being applied to it directly. While considerable prior work, e.g., PointNet-like methods, Transformers, and graph-based methods, has been devoted to processing point clouds, these approaches cannot consume spatial information directly as CNNs do, leading to a loss of spatial information. To address this problem, we propose the Continuous Volumetric Convolution Network (CVCN), which features a novel self-learning continuous convolution kernel. The continuous convolution kernel dispenses with a manually defined kernel function and manually placed kernel points, which brings convenience and flexibility. Moreover, CVCN hybridizes continuous convolutions with traditional CNNs to eliminate the time-consuming Farthest Point Sampling algorithm. Compared with state-of-the-art methods, CVCN achieves competitive results on point cloud classification and segmentation tasks.
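To make the idea of a self-learning continuous kernel concrete, below is a minimal PyTorch sketch, based only on the high-level description in this abstract: a small MLP maps each neighbor's continuous offset to a weight matrix, so no kernel function or kernel-point layout is fixed by hand. All names here (ContinuousConv, weight_mlp, hidden) are illustrative assumptions and are not taken from the CVCN paper itself.

    # Hypothetical sketch of a continuous point convolution; not the paper's actual CVCN layer.
    import torch
    import torch.nn as nn

    class ContinuousConv(nn.Module):
        """An MLP maps each neighbor's relative offset to a kernel weight,
        so no fixed kernel function or kernel-point layout is specified by hand."""

        def __init__(self, in_channels: int, out_channels: int, hidden: int = 32):
            super().__init__()
            # The kernel itself is learned: relative (x, y, z) offset -> weight matrix.
            self.weight_mlp = nn.Sequential(
                nn.Linear(3, hidden),
                nn.ReLU(inplace=True),
                nn.Linear(hidden, in_channels * out_channels),
            )
            self.in_channels = in_channels
            self.out_channels = out_channels

        def forward(self, rel_pos: torch.Tensor, neighbor_feats: torch.Tensor) -> torch.Tensor:
            # rel_pos:        (B, N, K, 3)    offsets of K neighbors from each center point
            # neighbor_feats: (B, N, K, C_in) features of those neighbors
            B, N, K, _ = rel_pos.shape
            # Generate a per-neighbor kernel from its continuous position.
            W = self.weight_mlp(rel_pos).view(B, N, K, self.in_channels, self.out_channels)
            # Apply the generated kernel and average over the neighborhood.
            out = torch.einsum("bnkc,bnkco->bno", neighbor_feats, W)
            return out / K

    # Usage: 2 clouds, 128 center points, 16 neighbors each, 8-dimensional input features.
    conv = ContinuousConv(in_channels=8, out_channels=32)
    rel = torch.randn(2, 128, 16, 3)
    feats = torch.randn(2, 128, 16, 8)
    print(conv(rel, feats).shape)  # torch.Size([2, 128, 32])

Because the weight generator consumes raw continuous coordinates, such a layer needs neither a hand-chosen kernel function nor preset kernel-point positions, which is the flexibility the abstract highlights.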
