Abstract
A longstanding aim of research in the field of computational vision, dense 3D reconstruction reached an important milestone with the first methods running in real time with millimetric precision, using RGB-D cameras and GPUs. However, these methods are not suitable for platforms with low computational resources. The goal of this work is to present a visual odometry method that uses regular cameras and does not require a GPU. The proposed method is based on sparse Structure from Motion (SfM) techniques, using data provided by dense 3D reconstruction. Visual odometry is the process of estimating the position and orientation of an agent (a robot, for instance) from images. This paper compares the proposed method with the odometry computed by Kinect Fusion. The odometry provided by this work can be used to recover camera position and orientation for dense 3D reconstruction.
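To illustrate the kind of sparse, feature-based two-view geometry that SfM-style visual odometry builds on (this is a generic sketch, not the paper's actual method), the snippet below estimates the essential matrix relating two camera views with the classical eight-point algorithm, using synthetic noise-free correspondences generated from a known rotation and translation. All names, the chosen motion, and the point counts are illustrative assumptions.

```python
import numpy as np

def rotation_z(theta):
    """Rotation about the camera z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def estimate_essential(x1, x2):
    """Eight-point algorithm.

    x1, x2: (N, 2) arrays of matched points in normalized image
    coordinates (intrinsics already removed). Returns a 3x3 essential
    matrix E satisfying x2_h^T E x1_h = 0 for each correspondence.
    """
    n = x1.shape[0]
    A = np.zeros((n, 9))
    for i in range(n):
        u1, v1 = x1[i]
        u2, v2 = x2[i]
        # row encodes the epipolar constraint for one correspondence
        A[i] = [u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)          # null-space vector, reshaped
    # enforce the rank-2 structure of an essential matrix
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

# Synthetic scene: 30 points in front of the first camera.
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(30, 3))

# Known relative motion (assumed values, for illustration only).
R = rotation_z(0.05)
t = np.array([0.3, 0.0, 0.0])

# Project into both views (pinhole, identity intrinsics).
x1 = X[:, :2] / X[:, 2:3]
Xc2 = X @ R.T + t
x2 = Xc2[:, :2] / Xc2[:, 2:3]

E = estimate_essential(x1, x2)

# Epipolar residuals |x2_h^T E x1_h| should be ~0 on clean data.
h1 = np.hstack([x1, np.ones((30, 1))])
h2 = np.hstack([x2, np.ones((30, 1))])
residuals = np.abs(np.sum((h2 @ E) * h1, axis=1))
```

In a full odometry pipeline, the rotation and translation direction would then be factored out of `E` (e.g. via the standard SVD-based decomposition with a cheirality check) and chained frame to frame; with real detected features, a robust estimator such as RANSAC replaces the direct least-squares fit used here.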