Abstract
Indoor localization is essential for robotic navigation and relies on the different sensors on board. In particular, visual localization with a single camera is a major challenge in highly symmetric environments (e.g. offices, hospitals or residences), where appearance patterns are repetitive and captures from different locations yield very similar images. To overcome this issue, in this paper we present a method that integrates multisensory information from an RGB-D camera, a LiDAR and motor encoders. Our approach simultaneously exploits spatial consistency from a reference topological map and temporal consistency from time-series observations. Inspired by human cognitive perception, we define a two-layered topological architecture that encompasses both coarse information about object distributions and structural information with some metric references. Categories of common objects in the environment, such as fire extinguishers or doors, are used as natural beacons. We evaluated our approach in two real-world buildings with a multi-aisle structure whose corridors have very similar appearance. Results demonstrate accurate localization despite the high degree of symmetry of the scenario, and show how ambiguity is significantly reduced as the agent progresses along its trajectory.