Abstract

Simultaneous localization and mapping (SLAM) is a crucial problem for mobile robots: estimating the surrounding environment (the map) while, at the same time, computing the robot's location within it. Most research on SLAM focuses on localization accuracy. In visual SLAM, localization means computing the robot's position relative to landmarks, which correspond to feature points in images. Feature points are therefore critical to localization accuracy and should be selected carefully. This article proposes a feature point selection method to improve localization accuracy. First, theoretical and numerical analyses demonstrate the importance of the spatial distribution of feature points. Then, an algorithm using flocks of features is proposed to select feature points. Experimental results show that the proposed flocks-of-features selector, implemented in visual SLAM, enhances the accuracy of both localization and mapping, verifying the necessity of feature point selection.
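To make the role of feature point distribution concrete, the following is a minimal sketch of one common distribution-aware selection strategy: bucket detections into a coarse image grid and keep only the strongest responses per cell, so the retained points spread across the frame instead of clustering. All names and parameters here are illustrative assumptions; this is not the paper's flocks-of-features algorithm, only a simple baseline that serves the same goal.

```python
def select_distributed(points, img_w, img_h, grid=4, per_cell=2):
    """Keep at most `per_cell` strongest detections per grid cell.

    points: iterable of (x, y, score) feature detections.
    Returns a list of (x, y) positions spread across the image.
    """
    cells = {}
    for x, y, score in points:
        # Map each detection to a grid cell index.
        key = (int(x * grid / img_w), int(y * grid / img_h))
        cells.setdefault(key, []).append((score, x, y))

    selected = []
    for bucket in cells.values():
        bucket.sort(reverse=True)  # strongest responses first
        selected.extend((x, y) for _, x, y in bucket[:per_cell])
    return selected
```

A selector like this caps how many points any one textured region can contribute, which is one simple way to enforce the spatial spread that the article's analysis argues is important.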

Highlights

  • Simultaneous localization and mapping (SLAM) is one of the key technologies in robotics

  • In terms of the sensors used to perceive the surroundings, SLAM can be classified into sonar-based, laser-based, and vision-based approaches, often with auxiliary sensors such as inertial measurement units (IMUs), compasses, infrared, and depth sensors.[9,10,11,12]

  • In a visual SLAM system, one or more cameras can be used as sensors;[14] here we focus on the single-camera case, known as monoSLAM.[1]

Introduction

Simultaneous localization and mapping (SLAM) is one of the key technologies in robotics. Most researchers working on visual SLAM focus on indoor environments,[1,2,3,4] while some works deal with airborne applications.[5,6] Recently, SLAM has also been applied in underwater scenarios.[7,8] In terms of the sensors used to perceive the surroundings, SLAM can be classified into sonar-based, laser-based, and vision-based approaches, often with auxiliary sensors such as inertial measurement units (IMUs), compasses, infrared, and depth sensors.[9,10,11,12] Thanks to the development of image processing and stereo vision, visual SLAM[1,2,3] has advanced rapidly and has been applied in a wide range of fields, such as augmented reality,[1] computer games,[4] and humanoid robots.[13] In a visual SLAM system, one or more cameras can be used as sensors;[14] here we focus on the single-camera case, known as monoSLAM.[1] Figure 2 shows the output of visual SLAM, including the feature map and the location of the camera.
