Abstract

Semantic interpretation of regions or entities is attracting increasing attention from scholars, owing to its broad applicability across several disciplines. In this context, modern autonomous systems are capable of semantically recognizing and separating entities from camera measurements, while effectively interpreting and interacting with their environment at a higher level. Extending this notion, the semantic representation of the surroundings, based on satellite and ground-level data, is considered a fundamental property for self-localization, especially in the absence of any georeferencing signal. With that in mind, in this article we present a robust algorithm for locating an autonomous vehicle within a georeferenced map using graph-based descriptors that combine semantic and metric information from both its memory and its query measurements. In particular, an enhanced prerecorded satellite map is processed to compute semantic memories, while ground-level query views are used to identify similarities and extrapolate the location of the moving vehicle. These components are evaluated through an extensive set of experiments, demonstrating the robustness and accuracy of our final robot localization system.
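To make the matching idea concrete, the sketch below illustrates one way a graph-based semantic-metric descriptor could be built and compared: each map node and each ground-level query view is summarized by the semantic classes of its regions and the metric distances between their centroids, and localization reduces to retrieving the most similar map node. This is only a minimal illustration under assumed class names, data layout, and similarity measure; it is not the descriptor or matching scheme actually used in the paper.

```python
import numpy as np
from itertools import combinations

# Assumed semantic classes for map and query regions (illustrative only).
CLASSES = ["building", "road", "vegetation", "water"]


def graph_descriptor(regions):
    """Build a toy graph-based descriptor from labeled regions.

    `regions` is a list of (label, (x, y)) tuples: a semantic class and the
    centroid of the region.  The descriptor is a class-pair matrix whose
    entries accumulate the metric distances between region centroids.
    """
    idx = {c: i for i, c in enumerate(CLASSES)}
    desc = np.zeros((len(CLASSES), len(CLASSES)))
    for (label_a, pos_a), (label_b, pos_b) in combinations(regions, 2):
        dist = np.hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1])
        desc[idx[label_a], idx[label_b]] += dist
        desc[idx[label_b], idx[label_a]] += dist
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc


def localize(query_regions, map_nodes):
    """Return the (node_id, score) of the map node most similar to the query."""
    query_desc = graph_descriptor(query_regions)
    scores = [(node_id, float(np.sum(query_desc * graph_descriptor(regions))))
              for node_id, regions in map_nodes.items()]
    return max(scores, key=lambda s: s[1])


# Toy usage: two candidate map locations and one ground-level query view.
map_nodes = {
    "node_A": [("building", (0, 0)), ("road", (5, 0)), ("vegetation", (0, 6))],
    "node_B": [("water", (0, 0)), ("road", (10, 0)), ("building", (10, 10))],
}
query = [("building", (1, 0)), ("road", (6, 1)), ("vegetation", (1, 7))]
print(localize(query, map_nodes))  # expected to prefer node_A
```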
