Abstract

Local feature description is a fundamental research topic in 3D rigid data matching. However, achieving a good balance among descriptiveness, robustness, compactness, and efficiency in a local shape descriptor remains challenging. To this end, we propose a novel feature representation of the 3D local surface called multi-view depth and contour signatures (MDCS). The key to the MDCS descriptor is its multi-view, multi-attribute description, which provides comprehensive and effective geometric information. Specifically, we first construct a repeatable Local Reference Frame (LRF) for the local surface to achieve rotation invariance. We then integrate the depth information, characterized in local coordinates, with the 2D contour cue derived from a 3D-to-2D projection, forming the depth and contour signatures (DCS). Finally, the MDCS descriptor is generated by concatenating the DCS descriptors captured from the three orthogonal view planes of the LRF into a single vector. The performance of MDCS is evaluated on several data modalities (i.e., LiDAR, Kinect, and Space Time) with respect to Gaussian noise, varying mesh resolutions, clutter, and occlusion. Experimental results and rigorous comparisons with state-of-the-art methods show that our approach achieves superior performance in terms of descriptiveness, robustness, compactness, and efficiency. Moreover, we demonstrate the feasibility of MDCS for matching both LiDAR and Kinect point clouds in 3D vision applications and evaluate the generalization ability of the proposed method on real-world datasets.
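To make the pipeline concrete, below is a minimal Python sketch of the multi-view depth-and-contour idea: estimate an LRF, express the local neighborhood in it, and concatenate a depth map and a contour histogram from each of the three orthogonal view planes. This is an illustration under stated assumptions, not the paper's implementation: the covariance-based LRF, the grid and radial binning, and all names (estimate_lrf, dcs_from_view, mdcs) are hypothetical stand-ins for the exact DCS construction described in the paper.

```python
import numpy as np

def estimate_lrf(points, center):
    """Estimate a local reference frame via covariance analysis.
    This is a common LRF construction; the paper's exact,
    sign-disambiguated LRF may differ."""
    diffs = points - center
    cov = diffs.T @ diffs / len(points)
    # Eigenvectors of the covariance matrix give three orthogonal axes.
    _, vecs = np.linalg.eigh(cov)  # columns sorted by ascending eigenvalue
    x, y, z = vecs[:, 2], vecs[:, 1], vecs[:, 0]
    return np.stack([x, y, z])  # rows are the LRF axes

def dcs_from_view(local_pts, axes, bins=8):
    """Hypothetical depth-and-contour signature (DCS) for one view plane.
    Projects the points onto the plane spanned by axes[0] and axes[1];
    depth is the coordinate along axes[2]. A binned depth map and a
    radial contour histogram stand in for the paper's DCS attributes."""
    u = local_pts @ axes[0]
    v = local_pts @ axes[1]
    d = local_pts @ axes[2]
    # Depth signature: mean depth per cell of a bins x bins grid.
    depth_sum, _, _ = np.histogram2d(u, v, bins=bins, weights=d)
    count, _, _ = np.histogram2d(u, v, bins=bins)
    depth_map = np.where(count > 0, depth_sum / np.maximum(count, 1), 0.0)
    # Contour signature: normalized histogram of the 2D radial extent.
    radii = np.hypot(u, v)
    contour, _ = np.histogram(radii, bins=bins)
    contour = contour / max(contour.sum(), 1)
    return np.concatenate([depth_map.ravel(), contour])

def mdcs(points, center, radius, bins=8):
    """Concatenate DCS signatures from the three orthogonal LRF planes."""
    mask = np.linalg.norm(points - center, axis=1) <= radius
    local = points[mask] - center
    R = estimate_lrf(points[mask], center)
    local = local @ R.T  # express neighbors in LRF coordinates
    x, y, z = np.eye(3)
    views = [(x, y, z), (y, z, x), (z, x, y)]  # three view planes
    return np.concatenate([dcs_from_view(local, v, bins) for v in views])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(500, 3))          # toy point cloud
    desc = mdcs(pts, center=pts[0], radius=2.0)
    print(desc.shape)  # 3 * (bins*bins + bins) = 216 for bins=8
```

Because the neighborhood is re-expressed in the LRF before projection, the resulting vector is rotation invariant by construction, mirroring the role the LRF plays in the abstract's first step.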
