Abstract

Visual re-localization has become one of the key technologies for long-term autonomous robots. Existing methods, which mostly focus on day-night, weather, and seasonal changes, are not applicable to indoor scenarios. Moreover, the layouts of objects in indoor scenes change frequently over time due to human interaction with the environment, which makes indoor re-localization challenging. This letter presents a novel indoor visual re-localization method for long-term autonomous robots. First, a scene graph model is proposed that incorporates object-level features and semantic relationships, overcoming the influence of dynamic objects by modeling the interactions among objects. Then, a visual re-localization method is developed on top of the proposed scene graph model. It adopts graph matching techniques to exploit pairwise object interactions as important features for re-localization, and employs a feature reweighting strategy to further reduce the impact of outliers in dynamic scenes. The proposed re-localization method has been verified in both photorealistic simulation environments and real-world scenarios. The results show that our approach is more robust to diverse object changes and performs comparably to state-of-the-art methods under illumination changes.
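The abstract describes matching scene graphs via pairwise object relations and downweighting inconsistent (moved) objects. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual algorithm: nodes are semantically labeled objects, the pairwise relation is centroid distance, each candidate correspondence is scored by how many pairwise relations it preserves, and matches with no geometric support are rejected as outliers. All names, tolerances, and the greedy assignment are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SceneObject:
    label: str                    # semantic class, e.g. "chair"
    position: tuple               # 2-D centroid in the map frame

def pairwise_relation(a, b):
    """Coarse pairwise interaction feature: Euclidean distance between centroids."""
    return ((a.position[0] - b.position[0]) ** 2
            + (a.position[1] - b.position[1]) ** 2) ** 0.5

def match_graphs(query, reference, rel_tol=0.5):
    """Greedy scene-graph matching with pairwise-consistency reweighting."""
    # Candidate correspondences: objects sharing the same semantic label.
    candidates = [(i, j)
                  for i, q in enumerate(query)
                  for j, r in enumerate(reference)
                  if q.label == r.label]
    # Reweighting: score each candidate by how many pairwise relations
    # it preserves with the other candidates.
    scored = []
    for (i, j) in candidates:
        support = 0
        for (k, l) in candidates:
            if k == i or l == j:
                continue
            d_q = pairwise_relation(query[i], query[k])
            d_r = pairwise_relation(reference[j], reference[l])
            if abs(d_q - d_r) <= rel_tol:
                support += 1
        scored.append(((i, j), support))
    # Greedy one-to-one assignment; candidates with zero pairwise support
    # (e.g. objects that were moved) are treated as outliers and dropped.
    scored.sort(key=lambda t: -t[1])
    used_q, used_r, matches = set(), set(), []
    for (i, j), support in scored:
        if i in used_q or j in used_r or support == 0:
            continue
        matches.append((i, j))
        used_q.add(i)
        used_r.add(j)
    return matches
```

For example, if a chair has been moved far from its mapped position, its candidate match preserves no pairwise distances and is discarded, while the static table and sofa are still matched correctly.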
