Abstract

Traditional point-line visual-inertial simultaneous localization and mapping (SLAM) systems suffer from low accuracy and cannot run in real time in indoor scenes with weak texture and changing illumination. This paper therefore proposes a point-line visual-inertial SLAM method for indoor environments with weak texture and varying illumination. First, after preprocessing images with bilateral filtering, we apply Speeded Up Robust Features (SURF) point feature extraction together with the Fast Library for Approximate Nearest Neighbors (FLANN) matching algorithm to improve the robustness of point feature extraction. Second, we establish a selection strategy for the minimum density threshold and length suppression parameters of line features, and introduce geometric constraints into line feature matching to improve the efficiency of line feature processing. The visual-inertial parameters and biases are then initialized with a maximum a posteriori (MAP) estimation method. Finally, simulation experiments compare the proposed method with the traditional tightly coupled monocular visual-inertial odometry using point and line features (PL-VIO). The results demonstrate that the proposed method operates effectively in real time, and that its positioning accuracy is 22% higher on average and 40% higher in scenarios with illumination changes and image blur.
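As an illustration of the point-feature pipeline described above (not the authors' implementation), the following Python/OpenCV sketch applies bilateral filtering before SURF detection and FLANN matching. The filter parameters, Hessian threshold, and ratio-test value of 0.7 are assumptions; SURF requires an opencv-contrib build with the nonfree module enabled.

```python
import cv2

def match_point_features(img1, img2):
    # Bilateral filtering smooths noise while preserving edges, which helps
    # SURF respond in weak-texture, unevenly lit indoor scenes.
    f1 = cv2.bilateralFilter(img1, 9, 75, 75)
    f2 = cv2.bilateralFilter(img2, 9, 75, 75)

    # SURF lives in opencv-contrib and needs a build with the nonfree module.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(f1, None)
    kp2, des2 = surf.detectAndCompute(f2, None)

    # FLANN with a KD-tree index for fast approximate nearest-neighbor search.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    knn_matches = flann.knnMatch(des1, des2, k=2)

    # Lowe's ratio test rejects ambiguous matches; 0.7 is an assumed threshold.
    good = [m for m, n in knn_matches if m.distance < 0.7 * n.distance]
    return kp1, kp2, good
```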

Highlights

  • According to the coupling mode of vision and IMU, visual-inertial simultaneous localization and mapping (SLAM) systems can be divided into two types: loosely coupled and tightly coupled

  • This study focuses on the point and line feature extraction and matching algorithms of the data preprocessing module

  • This paper presents a line feature matching method based on geometric constraints (see the sketch after this list)

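To make the density threshold, length suppression, and geometric-constraint ideas above concrete, the following minimal Python sketch filters detected line segments and checks candidate matches for orientation and midpoint consistency. All function names and thresholds (min_len, max_count, max_angle, max_shift) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def line_angle(seg):
    x1, y1, x2, y2 = seg
    return np.arctan2(y2 - y1, x2 - x1)

def line_length(seg):
    x1, y1, x2, y2 = seg
    return np.hypot(x2 - x1, y2 - y1)

def filter_lines(segs, min_len=30.0, max_count=80):
    # Length suppression: discard short, noisy segments. Density threshold:
    # keep at most max_count of the longest lines per frame.
    kept = [s for s in segs if line_length(s) >= min_len]
    kept.sort(key=line_length, reverse=True)
    return kept[:max_count]

def geometric_match(candidate_pairs, max_angle=np.deg2rad(5.0), max_shift=40.0):
    # candidate_pairs: (seg_a, seg_b) tuples from a descriptor-based matcher.
    good = []
    for a, b in candidate_pairs:
        # Orientation consistency, ignoring the 180-degree direction ambiguity.
        d = abs(line_angle(a) - line_angle(b)) % np.pi
        if min(d, np.pi - d) > max_angle:
            continue
        # Midpoint displacement consistency between the two frames.
        mid_a = np.array([(a[0] + a[2]) / 2.0, (a[1] + a[3]) / 2.0])
        mid_b = np.array([(b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0])
        if np.linalg.norm(mid_a - mid_b) > max_shift:
            continue
        good.append((a, b))
    return good
```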


Introduction

The visual simultaneous localization and mapping (SLAM) [1] algorithm is regarded as the core technique for achieving a mobile robot's autonomous operation. It enables mobile robots to obtain their own state and surrounding environmental information through onboard sensors in indoor environments with weak GPS signals, so as to construct maps, perceive the environment, and complete specific tasks autonomously. According to the coupling mode of vision and IMU, visual-inertial SLAM systems can be divided into two types: loosely coupled and tightly coupled. The loosely coupled method estimates motion independently with the vision and IMU modules and then fuses the two results, such as [2,3], while the tightly coupled method jointly optimizes over the raw data of the camera and the IMU, which makes it easier to obtain globally consistent estimation results.
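As a conceptual sketch of the distinction: in a tightly coupled system, visual reprojection residuals and IMU preintegration residuals enter a single joint least-squares (maximum a posteriori) cost over one shared state, rather than being fused after two separate estimations. The Python fragment below is purely illustrative; the state layout, residual functions, and weights are placeholders, not the paper's formulation.

```python
import numpy as np

def tightly_coupled_cost(state, visual_residual, imu_residual,
                         w_vis=1.0, w_imu=1.0):
    # state stacks camera poses, velocities, and IMU biases (layout assumed).
    r_v = visual_residual(state)  # reprojection errors of all tracked features
    r_i = imu_residual(state)     # IMU preintegration errors between keyframes
    # One joint cost: both sensors constrain the same state simultaneously,
    # unlike loose coupling, which fuses two independently estimated motions.
    return w_vis * (r_v @ r_v) + w_imu * (r_i @ r_i)
```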

