Abstract

This paper presents a novel framework for Human-Robot Interaction (HRI) using marker-less Augmented Reality (AR). Unlike marker-based AR, marker-less AR does not require the environment to be instrumented with special markers, which makes it well suited to unknown or unprepared environments. Current state-of-the-art visual SLAM approaches such as PTAMM (Parallel Tracking and Multiple Mapping) achieve this with constrained motion models within local co-ordinate systems. Our framework relaxes these motion-model constraints, enabling a wider range of camera movements to be tracked robustly, and extends PTAMM with a series of linear transformations. These transformations allow AR markers to be seamlessly placed and tracked within a global co-ordinate system of any size, so markers can be placed globally and viewed from any direction and perspective, even when the user returns to them from a different direction or perspective. We report on the model's performance and show how it can be applied to help humans interact with robots; in this paper we focus on how these markers can assist robot navigation tasks.
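The core idea of mapping marker positions from a local SLAM map into a shared global co-ordinate system can be illustrated with a rigid-body (homogeneous) transform. The sketch below is illustrative only and is not the authors' implementation; the function names and the example transform are assumptions for the purpose of the example.

```python
import numpy as np

def rigid_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t.

    Illustrative helper; not part of the paper's framework.
    """
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def local_to_global(T_map_to_global, p_local):
    """Map a 3D point from a local map frame into the global frame."""
    p_h = np.append(p_local, 1.0)        # homogeneous coordinates
    return (T_map_to_global @ p_h)[:3]

# Hypothetical example: a local map frame offset by (10, 0, 0) in the global frame.
T = rigid_transform(np.eye(3), np.array([10.0, 0.0, 0.0]))
marker_local = np.array([1.0, 2.0, 3.0])
marker_global = local_to_global(T, marker_local)  # -> [11., 2., 3.]
```

Composing such transforms per local map is what lets a marker placed in one map be re-located consistently when the camera re-enters the scene from a different direction.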
