Abstract

Establishing correspondences between 2D images and 3D point clouds is a way to recover the spatial relationship between 2D and 3D space, i.e., AR virtual-real registration. In this paper, we propose a network, 2D3D-GAN-Net, that learns invariant local cross-domain feature descriptors of 2D image patches and 3D point cloud volumes; these descriptors are then used to match 2D images to 3D point clouds. A Generative Adversarial Network (GAN) is embedded in 2D3D-GAN-Net to distinguish the domain from which each learned descriptor originates, which encourages the network to extract descriptors that are invariant across domains. Experiments show that the local cross-domain feature descriptors learned by 2D3D-GAN-Net are robust and can be used for cross-dimensional retrieval on a dataset of 2D image patches and 3D point cloud volumes. In addition, the learned 3D feature descriptors are used to register point clouds, further demonstrating the robustness of the learned local cross-domain feature descriptors.
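The core idea described above is adversarial domain confusion over the learned descriptors: a discriminator tries to tell whether a descriptor came from the 2D branch or the 3D branch, while the two encoders are trained to fool it so that matching patches and volumes end up with similar, domain-invariant descriptors. The following is a minimal sketch of that idea, assuming a PyTorch-style setup; the encoder architectures, descriptor size, losses, and weights are illustrative placeholders and are not taken from the paper.

```python
# Sketch only: toy encoders, a domain discriminator, and one training step
# combining a triplet matching loss with an adversarial domain loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

DESC_DIM = 128  # assumed descriptor dimensionality

class PatchEncoder2D(nn.Module):
    """Toy CNN mapping a grayscale image patch to a unit-norm descriptor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, DESC_DIM),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

class VolumeEncoder3D(nn.Module):
    """Toy 3D CNN mapping a voxelized point-cloud volume to a descriptor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(64, DESC_DIM),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

class DomainDiscriminator(nn.Module):
    """Predicts whether a descriptor came from the 2D or the 3D branch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(DESC_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, d):
        return self.net(d)

def training_step(enc2d, enc3d, disc, patches, volumes, neg_volumes,
                  opt_enc, opt_disc, margin=0.5, adv_weight=0.1):
    """One illustrative step: matching loss plus adversarial domain loss."""
    d2 = enc2d(patches)          # descriptors of 2D patches
    d3 = enc3d(volumes)          # descriptors of matching 3D volumes
    d3_neg = enc3d(neg_volumes)  # descriptors of non-matching volumes

    # 1) Update the discriminator: classify descriptor source (2D=1, 3D=0).
    opt_disc.zero_grad()
    disc_loss = (
        F.binary_cross_entropy_with_logits(disc(d2.detach()),
                                           torch.ones(d2.size(0), 1)) +
        F.binary_cross_entropy_with_logits(disc(d3.detach()),
                                           torch.zeros(d3.size(0), 1)))
    disc_loss.backward()
    opt_disc.step()

    # 2) Update the encoders: keep matching pairs close (triplet loss) and
    #    fool the discriminator so the descriptors become domain-invariant.
    opt_enc.zero_grad()
    triplet = F.triplet_margin_loss(d2, d3, d3_neg, margin=margin)
    fool = F.binary_cross_entropy_with_logits(
        disc(d3), torch.ones(d3.size(0), 1))  # 3D branch mimics the 2D label
    (triplet + adv_weight * fool).backward()
    opt_enc.step()
    return triplet.item(), disc_loss.item()
```

In this sketch `opt_enc` would optimize the parameters of both encoders jointly, while `opt_disc` optimizes only the discriminator; the abstract does not specify how the paper balances the matching and adversarial objectives, so `adv_weight` is purely a placeholder.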
