Abstract

Three-dimensional point cloud data is a critical source of information in real-world application domains such as computer vision, robotics, geographic information systems, and medical image processing. Because point clouds are discrete and unordered, 2D image feature extractors cannot be applied directly to 3D point cloud feature extraction. To address this, we propose PointFEA, a novel variational feature component extraction method that enhances the feature extraction and representation learning of 3D point cloud data. First, for feature extraction, local neighborhood encoding is combined with a local latent representation of the point cloud to obtain more strongly correlated point cloud features. Second, for representation learning, a multi-scale method maps point cloud data into a high-dimensional space to better capture critical features and adapt to different granularities of point cloud data. Finally, features of different dimensions are fed into a cross-fusion transformer to obtain local attention coefficients. We validate our methods on commonly used point cloud datasets, and the experiments demonstrate the effectiveness of our approach, achieving accuracies of 94.8% on ModelNet40 and 89.1% on ScanObjectNN.
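To make the "local neighborhood encoding" step concrete, the following is a minimal sketch of a generic k-nearest-neighbor grouping with relative-position features, a common building block in point cloud networks. The function name, the choice of k, and the exact feature layout (center coordinates, relative offsets, distances) are illustrative assumptions, not the paper's PointFEA formulation.

```python
import numpy as np

def knn_neighborhood_encoding(points, k=4):
    """For each point, gather its k nearest neighbors and build a simple
    local encoding per neighbor: [center coords, relative offset, distance].

    points: (N, 3) array of 3D coordinates.
    Returns: (N, k, 7) feature tensor.

    NOTE: generic illustrative sketch, not the paper's exact method.
    """
    # Pairwise squared distances, shape (N, N)
    diff = points[:, None, :] - points[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    # Indices of the k nearest neighbors, excluding the point itself
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]          # (N, k)
    neighbors = points[idx]                            # (N, k, 3)
    center = np.repeat(points[:, None, :], k, axis=1)  # (N, k, 3)
    rel = neighbors - center                           # relative offsets
    dist = np.linalg.norm(rel, axis=-1, keepdims=True) # (N, k, 1)
    return np.concatenate([center, rel, dist], axis=-1)

rng = np.random.default_rng(0)
pts = rng.normal(size=(32, 3))
feat = knn_neighborhood_encoding(pts, k=4)
print(feat.shape)  # (32, 4, 7)
```

In a full pipeline, such per-neighborhood features would be passed through shared MLPs and pooled, and the multi-scale variant would repeat this grouping at several values of k or several sampling radii before fusion.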
