Abstract

State-of-the-art Visual SLAM is moving from sparse features toward semi-dense features to provide richer information for environment perception, yet semi-dense methods often suffer from inaccurate depth-map estimation and can become unstable in some real-world scenarios. This paper proposes to extend the ORB-SLAM2 framework, a robust sparse-feature SLAM system that tracks camera motion with map maintenance and loop closure, by introducing the unified spherical camera model and a semi-dense depth map. The unified spherical camera model fits omnidirectional cameras well, so the proposed Visual SLAM system can handle fisheye cameras, which are commonly installed on modern vehicles to provide a larger perceiving region. In addition to sparse corner features, the proposed system also uses high-gradient regions as semi-dense features, thereby providing rich environment information. The paper presents in detail how the unified spherical camera model and semi-dense feature matching are fused with the original SLAM system. Both the camera-tracking accuracy and the estimated depth map of the proposed system are evaluated on real-world data and on CG-rendered data where ground-truth depth maps are available.
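The unified spherical camera model mentioned above projects a 3D point onto the unit sphere first, then onto a normalized image plane shifted along the optical axis by a mirror parameter, which lets one projection function cover both pinhole and fisheye lenses. As a minimal sketch (the abstract does not give the projection equations or parameter names, so `xi` and the intrinsics below are illustrative assumptions, not the paper's notation):

```python
import math

def project_unified_sphere(X, Y, Z, xi, fx, fy, cx, cy):
    """Project a 3D point with a unified spherical camera model (sketch).

    The point is first normalized onto the unit sphere, then projected
    onto a plane offset by the mirror parameter `xi` along the optical
    axis, and finally mapped to pixels by pinhole intrinsics
    (fx, fy, cx, cy). With xi = 0 this reduces to the plain pinhole model.
    """
    norm = math.sqrt(X * X + Y * Y + Z * Z)
    denom = Z + xi * norm  # plane offset grows with xi, bending rays inward
    u = fx * X / denom + cx
    v = fy * Y / denom + cy
    return u, v
```

For example, a point on the optical axis always lands at the principal point `(cx, cy)` regardless of `xi`, while off-axis points are pulled toward the image center as `xi` grows, mimicking fisheye distortion.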
