Abstract

Autonomous Simultaneous Localization and Mapping (SLAM) is an important topic in many engineering fields. Since stop-and-go systems are typically slow and full-kinematic systems may lack accuracy and integrity, this paper presents a novel hybrid “continuous stop-and-go” mobile mapping system called Scannect. A 3D terrestrial LiDAR system is integrated with a MEMS IMU and two Microsoft Kinect sensors to map indoor urban environments. The Kinects’ depth maps were processed using a new point-to-plane ICP that minimizes the reprojection error of the infrared camera and projector pair in an implicit iterative extended Kalman filter (IEKF). A new formulation of the 5-point visual odometry method is tightly coupled in the implicit IEKF without increasing the dimensions of the state space. Scannect can map and navigate in areas with textureless walls and provides an effective means for mapping large areas with many occlusions. Mapping long corridors (total travel distance of 120 m) took approximately 30 minutes and achieved a Mean Radial Spherical Error of 17 cm before smoothing or global optimization.
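The point-to-plane ICP mentioned above minimizes, for each correspondence, the residual along the target point's surface normal. The sketch below shows the standard small-angle linearization of a generic point-to-plane ICP step in NumPy; it is not the Scannect variant (which minimizes the Kinect infrared camera/projector reprojection error inside the IEKF), and all names are illustrative:

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane ICP step.

    Minimizes sum_i ((src_i + w x src_i + t - dst_i) . n_i)^2
    over the small rotation vector w and translation t.
    """
    A = np.hstack([np.cross(src, normals), normals])   # (N, 6) Jacobian
    b = -np.einsum('ij,ij->i', src - dst, normals)     # (N,) residuals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    w, t = x[:3], x[3:]
    # Exponentiate the small-angle vector w to a rotation matrix (Rodrigues).
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3), t
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    return R, t

def align(src, dst, normals, iters=10):
    """Iterate the linearized step, accumulating the total transform."""
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        R, t = point_to_plane_step(cur, dst, normals)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

In a full pipeline the correspondences would be re-established each iteration (e.g. by nearest-neighbor search on the depth map); here they are assumed given.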

Highlights

  • Visual Simultaneous Localization and Mapping (V-SLAM) is an important topic for autonomous robot navigation

  • This paper focuses on the forward filtering part of SLAM and proposes new ways to solve monocular visual odometry (VO) and the iterative closest point (ICP) problem in a tightly-coupled Kalman filter (KF)

  • The rigidly attached Kinect and inertial measurement unit (IMU) were moved in front of it to ensure sufficient excitation about all three axes
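The second highlight refers to fusing VO and ICP constraints in a tightly-coupled filter. The paper's implicit IEKF treats a measurement as a constraint g(x, z) = 0 rather than an explicit z = h(x). Below is a minimal NumPy sketch of a generic implicit iterated-EKF update, not the authors' formulation (which embeds the ICP and 5-point VO constraints); the range-to-beacon constraint used to exercise it is purely hypothetical:

```python
import numpy as np

def implicit_iekf_update(x_pred, P, z, g, dg_dx, dg_dz, Rz, iters=5):
    """Iterated EKF update for an implicit measurement constraint g(x, z) = 0.

    Measurement noise on z is propagated through D = dg/dz, giving the
    constraint residual an effective covariance R_eff = D Rz D^T.
    Each pass is a Gauss-Newton iterate of the MAP objective
    (x - x_pred)^T P^-1 (x - x_pred) + g(x, z)^T R_eff^-1 g(x, z).
    """
    x = x_pred.copy()
    for _ in range(iters):
        H = np.atleast_2d(dg_dx(x, z))      # Jacobian w.r.t. the state
        D = np.atleast_2d(dg_dz(x, z))      # Jacobian w.r.t. the measurement
        R_eff = D @ Rz @ D.T
        S = H @ P @ H.T + R_eff
        K = P @ H.T @ np.linalg.inv(S)
        r = np.atleast_1d(g(x, z))
        x = x_pred - K @ (r + H @ (x_pred - x))
    H = np.atleast_2d(dg_dx(x, z))
    D = np.atleast_2d(dg_dz(x, z))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + D @ Rz @ D.T)
    P_new = (np.eye(len(x_pred)) - K @ H) @ P
    return x, P_new
```

For example, with a 2D position state and the hypothetical constraint g(x, z) = ||x|| - z (range to a beacon at the origin), the update pulls the prior estimate onto the measured range circle without enlarging the state vector, which is the structural point the highlight makes about the tightly-coupled design.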


Introduction

Visual Simultaneous Localization and Mapping (V-SLAM) is an important topic for autonomous robot navigation. It is often branded as Structure from Motion (SFM) in computer vision and photogrammetric bundle adjustment in photogrammetry: two disciplines that study the 3D reconstruction of an object/scene while inferring the 6 degrees-of-freedom movement of the optical system, albeit for different purposes. The robotics community has been adopting computer vision techniques to address the SLAM problem; this research followed the photogrammetry methods. The term SLAM was first coined in 1995, but its concept and probabilistic basis date back to the 1980s [2]. The theoretical concept of SLAM is a matured subject [3] and over the years many

