Abstract

In this paper, we study vision-based localization for robots. We anticipate that numerous mobile robots will serve or interact with humans in indoor scenarios such as healthcare, entertainment, and public service. Such scenarios demand accurate and scalable indoor visual robot localization, the subject of this work. Most existing vision-based localization approaches suffer from low localization accuracy and poor scalability due to the limited effective range and detection accuracy of environmental visual features. In light of the wide indoor deployment of infrastructural cameras, this paper proposes BRIDGELOC, a novel vision-based indoor robot localization system that integrates robots' onboard cameras with infrastructural cameras. BRIDGELOC develops three key technologies: robot and infrastructural camera view bridging, rotation-symmetric visual tag design, and continuous localization based on robots' visual and motion sensing. Our system bridges the views of robots' and infrastructural cameras to accurately localize robots. We use visual tags with rotation-symmetric patterns to greatly improve scalability. Our continuous localization enables robot localization in areas without visual tags or infrastructural camera coverage. We implement our system and build a prototype robot using commercial off-the-shelf hardware. Our real-world evaluation validates BRIDGELOC's promise for indoor robot localization.
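The abstract does not detail how the continuous-localization component works, but the general idea it describes, dead-reckoning on motion sensing between absolute pose fixes from visual tags or infrastructural cameras, can be illustrated with a minimal sketch. The class names, the fixed blending weight `alpha`, and the example numbers below are illustrative assumptions, not the paper's actual algorithm; a Kalman-style filter would derive the weight from sensor covariances instead.

```python
import math
from dataclasses import dataclass


@dataclass
class Pose2D:
    x: float      # metres
    y: float      # metres
    theta: float  # heading in radians


def propagate(pose: Pose2D, d_forward: float, d_theta: float) -> Pose2D:
    """Dead-reckon with odometry increments (robot motion sensing)."""
    theta = pose.theta + d_theta
    return Pose2D(
        pose.x + d_forward * math.cos(theta),
        pose.y + d_forward * math.sin(theta),
        theta,
    )


def fuse_absolute_fix(predicted: Pose2D, fix: Pose2D, alpha: float = 0.8) -> Pose2D:
    """Blend an absolute pose fix (e.g. from a visual tag or an
    infrastructural-camera observation) with the odometry prediction.
    alpha is a hypothetical fixed weight on the absolute fix."""
    d_theta = math.atan2(
        math.sin(fix.theta - predicted.theta),
        math.cos(fix.theta - predicted.theta),
    )
    return Pose2D(
        (1 - alpha) * predicted.x + alpha * fix.x,
        (1 - alpha) * predicted.y + alpha * fix.y,
        predicted.theta + alpha * d_theta,
    )


# Example: dead-reckon through an area with no tag or camera coverage,
# then correct the accumulated drift when the next absolute fix arrives.
pose = Pose2D(0.0, 0.0, 0.0)
for d_fwd, d_th in [(0.10, 0.00), (0.10, 0.02), (0.10, 0.02)]:
    pose = propagate(pose, d_fwd, d_th)
pose = fuse_absolute_fix(pose, Pose2D(0.31, 0.01, 0.05))
print(pose)
```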
