Abstract

Pose estimation and 3D environment reconstruction are crucial for autonomous navigation in mobile robotics. Robust dense visual odometry based on an RGB-D sensor uses all pixels to estimate frame-to-frame motion by minimizing the photometric and geometric error. The 3D coordinates of each pixel are necessarily computed from its corresponding depth measurement. However, the depths reported by RGB-D sensors for pixels near object boundaries are often inaccurate. Standard robust dense visual odometry does not account for the impact of depth noise on the photometric and geometric errors. In this paper, we model the uncertainties of the photometric and geometric errors induced by depth noise and show that depth noise near object boundaries can significantly degrade motion estimation. We present a modified robust dense visual odometry with boundary-pixel suppression. We evaluate our system on publicly available benchmark datasets; the results show that our method achieves higher accuracy than the state-of-the-art Dense Visual Odometry (DVO).
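To make the pipeline in the abstract concrete, the following is a minimal sketch of the two steps it references: back-projecting a pixel to a 3D point using its depth (where boundary depth noise enters), and the per-pixel photometric residual minimized in dense visual odometry. The intrinsics and images are illustrative assumptions, not values from the paper, and this is not the authors' implementation.

```python
import numpy as np

# Assumed pinhole intrinsics (Kinect-style values, for illustration only).
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

def backproject(u, v, z):
    """3D point of pixel (u, v) given its depth z (meters).
    Any noise in z propagates directly into the 3D coordinates."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def photometric_residual(I_ref, I_cur, u, v, z, R, t):
    """Intensity difference after warping pixel (u, v) of the reference
    frame into the current frame under the candidate motion (R, t).
    Dense visual odometry minimizes the sum of such residuals."""
    p = R @ backproject(u, v, z) + t      # transform into current frame
    u2 = fx * p[0] / p[2] + cx            # project back onto the image
    v2 = fy * p[1] / p[2] + cy
    return I_ref[v, u] - I_cur[int(round(v2)), int(round(u2))]
```

With identity motion and identical frames the residual is zero; a noisy depth z near an object boundary shifts the warped pixel (u2, v2) and corrupts the residual, which is the effect the paper's boundary-pixel suppression targets.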
