Abstract

Raw point clouds obtained from real-world scanning are typically incomplete and nonuniformly distributed, which leads to structural losses in object shapes and complicates downstream high-level 3D vision tasks. This paper therefore proposes a learning-based method, CRA-Net, to repair partial point clouds and predict complete object shapes. Unlike most existing networks, which leverage only global features, CRA-Net exploits local features to restore sharper shape details with low instability. First, we propose an adaptive neighborhood query method that adjusts query centers and radii to cover different object shapes and acquire balanced local regions. Second, we build a parallel encoder to extract multiscale features from the input. Third, we design a cross-regional attention module based on a graph attention network; it quantifies the underlying relationships among all local features under conditions interpreted from global features, and based on these relationships each conditioned local feature vector searches across regions and selectively absorbs other local features. Fourth, we design a coarse decoder that collects these cross-region features and generates the skeleton of the complete point cloud. Finally, we refine the coarse point cloud by comparing it with the input and upsample it with folding-based layers. The network is first trained and tested on partial-complete point cloud pairs generated by scanning eight categories of objects with a virtual LiDAR, and then tested on real-world point clouds of indoor and outdoor scenes. Compared with representative existing methods, CRA-Net consistently restores the most accurate point clouds with the clearest details.
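The adaptive neighborhood query described above could be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name, the use of farthest-point sampling for query centers, and the choice of setting each radius from the k-th nearest-neighbor distance are all assumptions made here to show how centers and radii can adapt so that every local region covers a comparable number of points.

```python
import numpy as np

def adaptive_ball_query(points, num_centers, k):
    """Hypothetical sketch: spread query centers over the cloud with
    farthest-point sampling, then set each radius from the k-th
    nearest-neighbor distance so regions stay balanced in size."""
    # Farthest-point sampling for query centers.
    centers = [0]
    d = np.linalg.norm(points - points[0], axis=1)
    for _ in range(num_centers - 1):
        nxt = int(d.argmax())                      # point farthest from chosen set
        centers.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    regions, radii = [], []
    for c in centers:
        dist = np.linalg.norm(points - points[c], axis=1)
        r = np.sort(dist)[min(k, len(points) - 1)]  # adaptive radius per center
        regions.append(np.where(dist <= r)[0])      # balanced local region
        radii.append(r)
    return np.array(centers), np.array(radii), regions
```

Because each radius is derived from the local point density rather than fixed globally, sparse areas get larger neighborhoods and dense areas smaller ones, which matches the abstract's goal of balanced local regions.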
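The cross-regional attention step can likewise be sketched in a few lines. This is a simplified, hypothetical version: the function name, the concatenation of the global feature onto each local feature as the "condition", and the single-head dot-product attention are assumptions standing in for the paper's graph-attention design, which is not specified in the abstract.

```python
import numpy as np

def cross_regional_attention(local_feats, global_feat, Wq, Wk, Wv):
    """Hypothetical sketch: each local feature, conditioned on the global
    shape descriptor, attends over all local regions and selectively
    absorbs the others (graph-attention style)."""
    m = local_feats.shape[0]
    # Condition each local feature on the global feature.
    cond = np.concatenate([local_feats, np.tile(global_feat, (m, 1))], axis=1)
    q = cond @ Wq          # queries from conditioned local features
    k = local_feats @ Wk   # keys from raw local features
    v = local_feats @ Wv   # values
    scores = (q @ k.T) / np.sqrt(k.shape[1])
    # Row-wise softmax: weights quantify cross-region relationships.
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ v, w        # fused features and attention weights
```

The attention weights play the role of the "underlying relationships among local features" in the abstract: each row sums to one and decides how much of every other region a given local feature absorbs.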
