Abstract

This paper proposes a motion-model-free monocular SLAM algorithm for simultaneous localization and mapping of a robotic system. The only input is a monocular image sequence captured by a calibrated camera, from which the approach automatically estimates robust, accurate frame-to-frame camera poses and a 3D map of the environment. The pose estimation method exploits the epipolar geometry of structure from motion (SfM) to recover the camera's rotation matrix and translation direction, and a single 3D reference point is used to recover the translation distance. A random sample consensus (RANSAC) framework then selects a robust rotation matrix and translation vector, which are refined by a nonlinear optimization that minimizes the reprojection errors. Finally, a local bundle adjustment is performed to optimize the results. Extensive experimental evaluations demonstrate the effectiveness of the proposed monocular SLAM algorithm.
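The scale-recovery idea described above can be illustrated with a minimal synthetic sketch: the epipolar step yields the rotation and only the translation direction, and triangulating one 3D reference point of known depth fixes the metric scale. All variable names and numeric values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Ground-truth relative pose: x_cam2 = R @ x_cam1 + t  (illustrative values)
theta = 0.1
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([0.5, 0.1, 0.05])           # metric translation, unknown to pure SfM

# One 3D reference point with known coordinates in the first camera frame
X_ref = np.array([0.3, -0.2, 4.0])

# Calibrated camera => normalized image coordinates (bearing vectors)
u1 = X_ref / X_ref[2]
x2 = R @ X_ref + t
u2 = x2 / x2[2]

# The epipolar-geometry step recovers R and the translation only up to scale:
t_hat = t / np.linalg.norm(t)

# Triangulate the reference point assuming unit translation:
#   d2 * u2 = d1 * (R @ u1) + t_hat   ->   [R@u1 | -u2] [d1, d2]^T = -t_hat
A = np.column_stack([R @ u1, -u2])
d1, d2 = np.linalg.lstsq(A, -t_hat, rcond=None)[0]
X_est = d1 * u1                          # reconstruction under unit translation

# The known reference depth fixes the scale, giving the metric translation
scale = X_ref[2] / X_est[2]
t_rec = scale * t_hat
```

Because depths scale linearly with the translation magnitude, the ratio of the known reference depth to the triangulated depth equals the missing scale, and `scale * t_hat` recovers the metric translation vector.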
