Abstract

This paper introduces a new RGBD Simultaneous Localization And Mapping (RGBD-SLAM) system based on a revisited keyframe SLAM. This solution improves localization by combining visual and depth data in a local bundle adjustment. The paper then presents an extension of this RGBD-SLAM that takes advantage of partial knowledge of the scene: when a prior 3D model of the environment is available, it is used as a constraint, which drastically improves localization accuracy. The proposed solutions, called RGBD-SLAM and Constrained RGBD-SLAM, are evaluated on several public benchmark datasets and on real scenes acquired with a Kinect sensor. The system runs in real time on a standard central processing unit and can be useful for applications such as the localization of lightweight robots, UAVs, and VR headsets.
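As an illustrative sketch only (the notation below is generic and not taken from the paper), a local bundle adjustment that combines visual and depth data typically minimizes, over the keyframe poses T_i and 3D points X_j, a cost of the form

\min_{\{T_i\},\{X_j\}} \; \sum_{(i,j)} \rho\!\left( \left\| \pi(T_i X_j) - \mathbf{x}_{ij} \right\|^2 \right) \; + \; \lambda \sum_{(i,j)} \rho\!\left( \left( [T_i X_j]_z - d_{ij} \right)^2 \right),

where \pi is the camera projection function, \mathbf{x}_{ij} the observed 2D feature of point j in keyframe i, d_{ij} the depth measured by the RGB-D sensor for that feature, \rho a robust loss, and \lambda a weight balancing the visual and depth terms. The constrained variant additionally ties the reconstructed points or poses to the prior 3D model when one is available.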
