Abstract

One of the essential tasks required of a mobile robot is the autonomous and safe navigation of its working environment. In many cases, however, a model of the environment, or map, is not available for this task. Navigation requires a continuous estimate of the robot's location with respect to a map, which does not exist for an unknown environment. In such a scenario, the robot must be able to solve the problems of localization and mapping concurrently. The simultaneous solution of these two problems is known as SLAM (simultaneous localization and mapping), a complex problem that the scientific community has not yet fully solved: localization requires a map that is not yet available because it is still under construction, while building that map requires an estimate of the robot's location. For this reason, SLAM is often described as a chicken-and-egg problem. For a robot facing an unknown environment, the question becomes which to solve first, localization or mapping; the answer is that it must solve both at the same time. This article presents a study of some of the most representative open-source visual SLAM (vSLAM) methods, starting from an analysis of their characteristics and presenting selection criteria for an experimental design that contrasts their advantages and disadvantages. Two of the most representative vSLAM algorithms were considered: RTAB-Map and ORB-SLAM2. The experiments were validated with a robotic system designed for this purpose, fully compatible with ROS (Robot Operating System).
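As a complement to the experimental description above, the following is a minimal sketch (not taken from the article) of how pose estimates published by a ROS-based vSLAM node could be recorded for offline comparison between methods such as RTAB-Map and ORB-SLAM2. The node name, the odometry topic (/rtabmap/odom), and the output file are assumptions that would have to be adapted to the actual robotic system and vSLAM configuration used in the experiments.

```python
#!/usr/bin/env python
# Illustrative sketch only: log the trajectory published by a ROS vSLAM node
# so that two methods can later be compared (e.g., with trajectory-error tools).
# The topic name "/rtabmap/odom" is an assumption, not taken from the article.
import rospy
from nav_msgs.msg import Odometry

poses = []  # collected (timestamp, x, y, z) samples for one run


def on_odom(msg):
    p = msg.pose.pose.position
    poses.append((msg.header.stamp.to_sec(), p.x, p.y, p.z))


if __name__ == "__main__":
    rospy.init_node("vslam_pose_logger")
    rospy.Subscriber("/rtabmap/odom", Odometry, on_odom)
    rospy.spin()
    # After shutdown, dump the estimated trajectory for offline analysis.
    with open("trajectory.txt", "w") as f:
        for t, x, y, z in poses:
            f.write("%f %f %f %f\n" % (t, x, y, z))
```

The same logger could be pointed at the pose output of another method under test, so that both estimated trajectories are stored in a common format for comparison.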
