Abstract
Identification of rigid-body objects from point clouds is a well-studied topic and one of the most important applications of 3D computer vision in robotics, industrial automation, unmanned systems, and autonomous vehicles. However, simultaneous retrieval of both the identity and the pose of an object remains an open problem because of the vast search space of all possible poses. Moreover, the precision of pose retrieval is inherently hard to improve because the resolution of the pose database is limited. This paper presents a point cloud retrieval method that finds the object ID and pose simultaneously. Unlike traditional learning-based methods that retrieve a point cloud by matching high-dimensional feature vectors, we solve the retrieval problem with an end-to-end point cloud convolutional neural network (CNN) that linearly projects the 3D point cloud onto a unique, discriminating 2D view using an energy-based loss function. Based on the proposed projection mechanism, the rigid-body pose is recovered from the learned linear projection using the reduced QR factorization. As a result, the object ID retrieval problem is simplified because the within-class differences caused by the varying poses of an object are removed. The projection results exhibit strong shape-based discriminating features, while the precision of pose retrieval benefits from the generalization ability of the network model. Comprehensive experiments show that the proposed method yields considerably more robust and accurate object ID retrieval and, at the same time, significantly improves the precision of pose retrieval compared with descriptor-based methods as well as two commonly used learning-based methods.
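To illustrate the role of the reduced QR factorization mentioned above, the following minimal sketch assumes the network's output can be summarized as a 3-by-2 linear projection matrix A (a hypothetical toy value here, not taken from the paper) and that the orthogonal factor of A encodes the rotational part of the pose; the paper's exact formulation may differ.

import numpy as np

# Hypothetical learned linear projection (3 x 2) mapping 3D points to the 2D view.
A = np.array([[ 0.8, 0.1],
              [ 0.2, 0.9],
              [-0.5, 0.3]])

# Reduced QR factorization: A = Q R, with Q (3 x 2) orthonormal and R (2 x 2) upper triangular.
Q, R = np.linalg.qr(A, mode='reduced')

# Complete the orthonormal 3 x 2 frame Q to a full 3 x 3 rotation by appending the
# cross product of its columns; flip the sign if needed to keep the determinant +1.
third = np.cross(Q[:, 0], Q[:, 1])
R_full = np.column_stack([Q, third])
if np.linalg.det(R_full) < 0:
    R_full[:, 2] *= -1

print(R_full @ R_full.T)  # approximately the identity, so R_full is a valid rotation

The point of the sketch is only that, once the projection is constrained to be linear, standard numerical linear algebra (here numpy's reduced QR) suffices to separate an orthogonal pose component from the remaining factor.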