Abstract

Visual odometry (VO) is a fundamental technique for many robotics and augmented reality (AR) applications. However, most existing RGB-D VO systems degrade significantly when large occlusions are present and/or when a large portion of depth values is invalid due to the limited range of an RGB-D camera, which prevents their use in many practical applications. To address these two problems, we present RGB-D DSO, an RGB-D direct sparse odometry whose core is sliding-window optimization with occlusion removal, complemented by a depth refinement module. Occlusion removal excludes the negative effects of occluded objects when minimizing the energy function used for camera pose tracking. Depth refinement ensures that each keyframe's depth map contains a sufficient number of valid, uniformly distributed depth values. Experimental results on three public datasets demonstrate that our method achieves smaller tracking error than most existing state-of-the-art methods. Meanwhile, our system takes only 21.93 ms to track a frame, which is faster than most existing methods.
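
As a rough illustration of the occlusion-removal idea described above, the following minimal C++ sketch (not the authors' implementation) shows how points flagged as occluded, or lacking a valid depth, might simply be skipped when accumulating a Huber-weighted photometric energy for pose tracking. The SparsePoint structure, the occlusion flag, and the photometricEnergy function are hypothetical names introduced here purely for illustration.

```cpp
// Minimal sketch: exclude occluded points and invalid depths from a direct
// photometric energy. This is an assumption-based illustration, not the
// paper's actual implementation.
#include <cmath>
#include <cstdio>
#include <vector>

struct SparsePoint {
    double intensity_ref;  // intensity in the reference keyframe
    double intensity_cur;  // intensity at the projected location in the current frame
    double depth;          // depth in the reference keyframe (<= 0 means invalid)
    bool   occluded;       // hypothetical flag produced by an occlusion-removal step
};

// Accumulate a Huber-weighted photometric energy over the sparse points,
// skipping occluded points and points without a valid depth.
double photometricEnergy(const std::vector<SparsePoint>& points, double huber_k = 9.0) {
    double energy = 0.0;
    for (const SparsePoint& p : points) {
        if (p.occluded || p.depth <= 0.0) continue;    // excluded from the energy
        double r = p.intensity_cur - p.intensity_ref;  // photometric residual
        double a = std::fabs(r);
        energy += (a <= huber_k) ? 0.5 * r * r : huber_k * (a - 0.5 * huber_k);
    }
    return energy;
}

int main() {
    std::vector<SparsePoint> pts = {
        {120.0, 123.0,  1.8, false},  // valid point, small residual
        { 80.0, 200.0,  2.1, true},   // occluded: large residual, but excluded
        { 95.0,  96.0, -1.0, false},  // invalid depth: excluded
    };
    std::printf("energy = %.3f\n", photometricEnergy(pts));
    return 0;
}
```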
