Abstract

Global localization is an integral part of indoor robot navigation. Traditional image-level methods still struggle with dynamic environments, illumination variation, and large viewpoint changes, while methods based on 3D point clouds suffer from low computational efficiency and make localization difficult. To address the low localization stability encountered in object-level navigation and the kidnapped-robot problem, an efficient global localization framework based on graph matching is proposed. First, semantic information is extracted by an instance segmentation network, and an efficient 3D bounding box extraction algorithm is used to build an object-level map. The object-level map is then converted into a semantic topological graph and preprocessed, and the feature matrix of the map is obtained through tree generation and a graph kernel method. A matching method based on a voting mechanism establishes correspondences between the local graph and the global graph. Finally, the pose of the robot is recovered by a two-stage point cloud registration method. Experiments are carried out on the SceneNN dataset and in three real indoor environments. Extensive experiments show that our approach reaches high accuracy and achieves robust global localization under large viewpoint changes.
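To illustrate the final step, the sketch below shows one common way to implement a two-stage point cloud registration: a coarse global alignment from FPFH feature correspondences with RANSAC, followed by ICP refinement seeded with the coarse pose. It uses the Open3D library with generic parameter values; the helper names (preprocess, two_stage_registration) and all voxel sizes and distance thresholds are assumptions for illustration, not the paper's exact registration method.

# Minimal sketch of a generic two-stage registration (coarse RANSAC on FPFH
# features, then ICP refinement) using Open3D. Parameter values are illustrative.
import open3d as o3d

def preprocess(pcd, voxel_size):
    """Downsample and compute normals + FPFH features for coarse matching."""
    down = pcd.voxel_down_sample(voxel_size)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 5, max_nn=100))
    return down, fpfh

def two_stage_registration(source, target, voxel_size=0.05):
    """Hypothetical helper: coarse global alignment, then fine ICP refinement."""
    src_down, src_fpfh = preprocess(source, voxel_size)
    tgt_down, tgt_fpfh = preprocess(target, voxel_size)

    # Stage 1: coarse alignment via RANSAC over FPFH feature correspondences.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True, voxel_size * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
         o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel_size * 1.5)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Stage 2: fine refinement with point-to-plane ICP, seeded by the coarse pose.
    fine = o3d.pipelines.registration.registration_icp(
        src_down, tgt_down, voxel_size * 0.4, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation  # 4x4 pose of the local cloud w.r.t. the global map

In a framework like the one described, such a routine would be applied after graph matching has already identified which objects of the global map correspond to the local observation, so the registration only has to resolve the metric pose rather than the data association.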
