Abstract

This work presents a variant approach to the monocular SLAM problem focused on exploiting the advantages of a human-robot interaction (HRI) framework. Based upon delayed inverse-depth feature initialization SLAM (DI-D SLAM), a known monocular technique, several crucial modifications are introduced that take advantage of data from a secondary monocular sensor, assumed to be worn by a human. The human explores an unknown environment together with the robot, and when their fields of view coincide, the two cameras are treated as a pseudo-calibrated stereo rig to estimate depth through parallax. These depth estimates address a known limitation of DI-D monocular SLAM, namely the requirement of metric scale initialization through known artificial landmarks. The same process also improves the performance of the technique when introducing new landmarks into the map. The suitability of the stereo estimation approach, based on SURF feature matching, is discussed. Experimental validation with real data shows that more features are correctly initialized, with reduced uncertainty, thereby reducing scale and orientation drift. How a real-time implementation could take advantage of this approach is also discussed.

Highlights

  • Sensors are widely used in scientific and technical fields such as robotics, enabling perception of the environment and the elements surrounding robotic systems

  • The simultaneous localization and mapping (SLAM) problem states how a mobile robotic device can operate in an a priori unknown environment by means of only onboard sensors to simultaneously build a map of its surroundings and use it to track its position

  • Assuming that the other camera (the "free camera", or Cf) with known pose is near the robotic camera performing SLAM (the "SLAM camera", or Cs), joining observations from both cameras allows performing stereo-like estimation when their fields of view overlap
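The stereo-like estimation outlined above can be illustrated with standard two-view linear (DLT) triangulation: given the poses of Cs and Cf and a matched feature observed by both, the 3D point (and hence its depth) is recovered from parallax. The following sketch is illustrative only and assumes simplified identity intrinsics and an idealized one-unit baseline; it is not the paper's implementation, which additionally relies on SURF matching to obtain the correspondences.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen by two cameras.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v)."""
    # Each observation contributes two linear constraints on the
    # homogeneous point X; stack them and take the SVD null vector.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Toy setup (assumed, not from the paper): identity intrinsics,
# free camera Cf translated one unit along x from the SLAM camera Cs.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # Cs
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # Cf
X_true = np.array([0.5, 0.2, 4.0])

# Project the point into both views, then triangulate it back.
h = np.append(X_true, 1)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]
x2 = (P2 @ h)[:2] / (P2 @ h)[2]
print(triangulate(P1, P2, x1, x2))  # recovers ~[0.5, 0.2, 4.0]
```

In the HRI setting, the accuracy of this estimate depends on the parallax between the two viewpoints and on how well the free camera's pose is known, which is why the overlap of the fields of view is a precondition.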


Summary

Introduction

Sensors are widely used in scientific and technical fields such as robotics, enabling perception of the environment and the elements surrounding robotic systems. This has led to the development of several sensor-based problems within the field, such as simultaneous localization and mapping (SLAM). The SLAM problem states how a mobile robotic device can operate in an a priori unknown environment using only onboard sensors to simultaneously build a map of its surroundings and track its position within it. SLAM is one of the most important problems in robotics and is heavily related to sensors and their applications. Many approaches have been developed to deal with the SLAM problem, based on a wide selection of sensors and combinations thereof. Exteroceptive sensors can be used to solve both mapping and localization, while proprioceptive sensors can only address localization.


