Abstract

Simultaneous Localization and Mapping (SLAM) that fuses visual and inertial measurements has attracted considerable attention in the robotics and computer vision communities. Nevertheless, balancing real-time requirements against accuracy remains a difficult challenge. Therefore, a new tightly-coupled visual-inertial simultaneous localization and mapping approach is proposed that provides accurate, real-time motion estimation and map reconstruction. The nonlinear optimization is built on the idea that the frontend and backend of a visual-inertial SLAM (VISLAM) system can reinforce one another. Moreover, a new inertial measurement unit (IMU) initialization method is employed to rapidly and accurately estimate the metric scale, gravity direction, velocity, and gyroscope and accelerometer biases. In addition, the frontend provides accurate motion estimation, which improves backend optimization by supplying a more precise initial state. Feedback-based relocalization and continued-SLAM frameworks are also designed for autonomous robot navigation and mapping. The accuracy of the proposed VISLAM system is evaluated through experiments on the public EuRoC dataset and in real-world environments. The experiments show that the proposed system achieves higher accuracy at lower computational cost than existing VISLAM systems.
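As background for the tightly-coupled nonlinear optimization mentioned above, a commonly used cost function in visual-inertial SLAM (a generic sketch, not necessarily the exact formulation of this paper) jointly minimizes visual reprojection residuals and IMU preintegration residuals over the estimated states $\mathcal{X}$ (keyframe poses, velocities, and IMU biases):

\[
\min_{\mathcal{X}} \;
\sum_{(i,j)\in\mathcal{C}} \rho\!\left( \left\lVert \mathbf{r}_{\mathcal{C}}\!\left(\mathbf{z}_{ij}, \mathcal{X}\right) \right\rVert^{2}_{\boldsymbol{\Sigma}_{ij}} \right)
\;+\;
\sum_{k\in\mathcal{B}} \left\lVert \mathbf{r}_{\mathcal{B}}\!\left(\mathbf{z}_{k,k+1}, \mathcal{X}\right) \right\rVert^{2}_{\boldsymbol{\Sigma}_{k}}
\]

Here $\mathbf{r}_{\mathcal{C}}$ is the reprojection residual of landmark observation $\mathbf{z}_{ij}$, $\mathbf{r}_{\mathcal{B}}$ is the IMU preintegration residual between consecutive keyframes $k$ and $k+1$, $\rho(\cdot)$ is a robust kernel, and $\boldsymbol{\Sigma}$ denotes the corresponding measurement covariances. In this standard formulation, an accurate frontend estimate serves as the initial guess for the backend optimizer, which is consistent with the frontend-backend interaction described in the abstract.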
