Abstract

In robotics and mapping, prior knowledge of an environment can be incorporated as virtual assets in a simultaneous localization and mapping (SLAM) solution. Borrowing the concept of affordances from robotic manipulation (i.e., virtual/interactive object models or primitives), this work addresses the fundamental duality of discrepancies between virtual and physical structures in localization and mapping. We propose a multimodal, non-Gaussian solution as a fundamental mechanism for leveraging navigation-affordance assets during the localization and mapping process while simultaneously identifying any mismatches with the physical object. This gives the localization-and-mapping state estimate more robust access to non-conventional and imperfect prior information about the environment, while computationally identifying assumed model discrepancies from imperfect sensor data. We use non-Gaussian factor graphs as the modeling language to incorporate navigation affordances with multi-sensor data, in a manner similar to existing SLAM methods. We illustrate the approach with synthesized and real-world data from the construction industry, where digital assets (such as drawings or models) are good proxies for how navigation affordances can be generated and used in general.
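To make the core idea concrete, the following is a minimal, hypothetical 1-D sketch (not from the paper) of a multimodal measurement factor: a virtual asset (e.g., a construction drawing) predicts where a wall should be, and the range-measurement likelihood is a two-mode Gaussian mixture, with one narrow mode trusting the virtual asset and one broad mode absorbing virtual-vs-physical mismatch. All positions, weights, and noise values below are illustrative assumptions.

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Gaussian probability density evaluated at x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def mixture_likelihood(z, x, asset_pos=5.0, w_match=0.7, w_mismatch=0.3):
    """Likelihood of range measurement z given robot position x.

    The virtual asset places the wall at asset_pos; the narrow mode assumes
    the asset is physically correct, the broad mode models a mismatch.
    """
    predicted = asset_pos - x                    # range predicted by the virtual asset
    match = gaussian(z, predicted, 0.1)          # narrow mode: asset matches reality
    mismatch = gaussian(z, predicted, 1.0)       # broad mode: asset may be wrong
    return w_match * match + w_mismatch * mismatch

# Fuse a weak odometry prior with the multimodal asset factor by evaluating
# the (unnormalized) posterior over robot position on a grid.
xs = np.linspace(0.0, 5.0, 501)
prior = gaussian(xs, 2.0, 0.5)                   # odometry suggests x ~= 2.0 m
measured_range = 2.6                             # actual sensor reading
posterior = prior * mixture_likelihood(measured_range, xs)
posterior /= posterior.sum()                     # normalize over the grid

x_map = xs[np.argmax(posterior)]
print(f"MAP position estimate: {x_map:.2f} m")
```

The narrow "asset is correct" mode pulls the estimate toward the measurement predicted by the drawing, while the broad mode keeps the posterior from collapsing onto a wrong hypothesis when the physical structure deviates from the virtual model; the paper's factor-graph formulation generalizes this mixture idea to full multi-sensor SLAM.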
