Abstract

In the field of autonomous robot navigation, autonomous positioning is one of the most difficult challenges. Simultaneous localization and mapping (SLAM) technology can incrementally construct a map of a robot's path through an unknown environment while estimating the robot's position within that map, providing an effective solution for fully autonomous navigation. A camera captures two-dimensional digital images of the real three-dimensional world. These images contain rich colour and texture information and highly recognizable features, which provide indispensable information for robots to understand and recognize the environment as they autonomously explore unknown surroundings. Therefore, more and more researchers use cameras to solve the SLAM problem, an approach known as visual SLAM. Visual SLAM must process large amounts of image data collected by the camera, which places high performance demands on the computing hardware and thus greatly limits its application on embedded mobile platforms. This paper presents a parallelization method for embedded hardware equipped with an embedded GPU, using CUDA, a parallel computing platform, to accelerate the visual front-end processing of the visual SLAM algorithm. Extensive experiments verify the effectiveness of the method. The results show that the presented method effectively improves the operating efficiency of the visual SLAM algorithm while preserving its original accuracy.
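
As a rough illustration of the per-pixel parallelism that such a CUDA front end exploits, the minimal sketch below converts an RGB frame to grayscale, a typical preprocessing step before feature extraction. It is a hypothetical example under stated assumptions (device pointers already allocated; all function and parameter names are illustrative), not the implementation evaluated in this paper:

#include <cstdint>
#include <cuda_runtime.h>

// One thread per pixel: each thread reads one RGB triple and writes
// one grayscale value, so the whole frame is processed in parallel.
__global__ void rgb_to_gray(const uint8_t* rgb, uint8_t* gray,
                            int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    const uint8_t* p = rgb + 3 * idx;
    // Standard Rec. 601 luminance weights.
    gray[idx] = static_cast<uint8_t>(0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2]);
}

// Host-side launch over a 2-D grid covering the image.
void to_gray(const uint8_t* d_rgb, uint8_t* d_gray, int w, int h)
{
    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    rgb_to_gray<<<grid, block>>>(d_rgb, d_gray, w, h);
    cudaDeviceSynchronize();
}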

Highlights

  • To achieve fully autonomous operation in an unknown environment, mobile robots must solve two basic problems: positioning themselves and perceiving the environment

  • The front end sits at the lower level of the visual simultaneous localization and mapping (SLAM) system and is known as visual odometry (VO) [7]

  • Visual SLAM must process large amounts of image data, so its demands on computing hardware are high, which limits the application of visual SLAM on embedded platforms



Introduction

To achieve fully autonomous operation in an unknown environment, mobile robots must solve two basic problems: positioning themselves and perceiving the environment. The vision sensors capture images, and the front end performs feature extraction and matching on these images and roughly estimates the positions of the feature points and of the robot. The estimate is passed to the back end, which executes graph optimization to obtain a more accurate result. In this way, the system can localize and build a map at the same time; the optimized result is also passed to loop-closure detection, which eliminates the error accumulated as the robot moves over long periods and feeds the result back for tracking. We combine the selected scheme with the computing performance of the embedded hardware and select the most appropriate GPU parallel computing method to optimize and improve the processing performance and operating efficiency of visual SLAM while ensuring good positioning accuracy.
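
The feature extraction and matching described above dominate the front-end workload and are naturally data-parallel: each query descriptor can be matched against the reference set independently. As a minimal, hypothetical sketch (not this paper's actual front end; the 256-bit ORB-style descriptor layout and all names are assumptions), the CUDA kernel below brute-force matches binary descriptors by Hamming distance, one thread per query descriptor:

#include <climits>
#include <cstdint>
#include <cuda_runtime.h>

// Each descriptor is 256 bits = 8 x 32-bit words (ORB-style).
// One thread scans the whole train set for its query's nearest neighbour.
__global__ void match_descriptors(const uint32_t* query,  // n_q x 8 words
                                  const uint32_t* train,  // n_t x 8 words
                                  int n_q, int n_t, int* best_idx)
{
    int q = blockIdx.x * blockDim.x + threadIdx.x;
    if (q >= n_q) return;

    int best = INT_MAX, best_j = -1;
    for (int j = 0; j < n_t; ++j) {
        int dist = 0;
        for (int w = 0; w < 8; ++w)
            // __popc counts set bits, so XOR + popcount = Hamming distance.
            dist += __popc(query[q * 8 + w] ^ train[j * 8 + w]);
        if (dist < best) { best = dist; best_j = j; }
    }
    best_idx[q] = best_j;
}

// Host-side launch: one thread per query descriptor.
void match(const uint32_t* d_query, const uint32_t* d_train,
           int n_q, int n_t, int* d_best_idx)
{
    int block = 128;
    int grid = (n_q + block - 1) / block;
    match_descriptors<<<grid, block>>>(d_query, d_train, n_q, n_t, d_best_idx);
    cudaDeviceSynchronize();
}

One thread per query keeps the sketch simple; a production matcher on an embedded GPU would typically tile the train descriptors through shared memory and add a distance-ratio test before accepting a match.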
