Abstract
Geometry-based Visual Odometry (VO) techniques are well established in computer vision and robotics. They use methods from multi-view geometry to estimate camera motion from visual data obtained from one or more cameras. Tracking camera motion precisely across views depends on correctly estimating correspondences between salient points in those views. In practice, geometry-based methods are quite effective, but they break down in challenging cases where tracking fails due to abrupt motion, occlusions, textureless scenes, low light, and similar conditions. In contrast, end-to-end learning from visual data using deep neural networks is an emerging area of research that handles such challenging cases successfully; however, these methods are computationally expensive and do not outperform their geometry-based counterparts under conditions favorable to the latter. Considering these facts, our goal in this work is to integrate deep descriptors into a traditional geometry-based VO pipeline to improve the correspondence between image points used for tracking. We propose a simple stereo VO pipeline inspired by popular techniques from the literature. Two conventional and four deep descriptors are evaluated in experiments on several image sequences of the challenging KITTI benchmark dataset. We determine empirically that deep descriptors can effectively reduce drift in the VO estimates and produce better camera trajectories. The experimental results on the KITTI dataset demonstrate that our VO method performs on par with state-of-the-art works reported in the literature.
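The correspondence step the abstract refers to can be illustrated as follows. This is a minimal, hypothetical sketch of descriptor matching between two frames, not the paper's actual pipeline: it assumes descriptors are plain float vectors (conventional or deep alike) and combines mutual nearest-neighbour matching with Lowe's ratio test, a common recipe for filtering unreliable correspondences before pose estimation. The function name and NumPy-only implementation are illustrative choices.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Mutual nearest-neighbour matching with Lowe's ratio test.

    desc_a: (N, D) descriptors from frame A
    desc_b: (M, D) descriptors from frame B (M >= 2)
    Returns a list of (i, j) index pairs kept as correspondences.
    """
    # Squared Euclidean distances between every pair of descriptors,
    # via the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = (np.sum(desc_a ** 2, axis=1)[:, None]
          + np.sum(desc_b ** 2, axis=1)[None, :]
          - 2.0 * desc_a @ desc_b.T)
    d2 = np.maximum(d2, 0.0)  # clamp tiny negatives from round-off

    nn_b_of_a = np.argsort(d2, axis=1)  # for each row, B indices by distance
    nn_a_of_b = np.argmin(d2, axis=0)   # for each column, nearest A index

    matches = []
    for i in range(desc_a.shape[0]):
        j, j2 = nn_b_of_a[i, 0], nn_b_of_a[i, 1]
        # Keep the pair only if (1) the best match is clearly better than
        # the second best (ratio test on squared distances) and (2) the
        # match is mutual, i.e. i is also the nearest neighbour of j.
        if d2[i, j] < (ratio ** 2) * d2[i, j2] and nn_a_of_b[j] == i:
            matches.append((i, j))
    return matches
```

In a geometry-based VO pipeline, the surviving pairs would feed a robust pose estimator (e.g., RANSAC over an essential-matrix or PnP model); swapping conventional descriptors for deep ones changes only how `desc_a` and `desc_b` are computed, not this matching step.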