Abstract

Growing interest in robots capable of autonomous navigation has created a need for robust methods that allow them to operate in challenging, dynamic environments. In this context, visual odometry (VO) has emerged as a key technique, estimating a robot's position from image sequences captured by onboard cameras. In this paper, a VO algorithm is proposed that achieves sub-pixel precision by combining optical flow and direct methods. The approach uses only a downward-facing monocular camera, eliminating the need for additional sensors. Experimental results demonstrate the robustness of the method across a variety of surfaces, with minimal drift error.
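To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of the optical-flow half of such a pipeline: planar camera translation is estimated from a downward-facing monocular camera by tracking features with pyramidal Lucas-Kanade flow. The flat-ground assumption, the focal length `f_px`, and the camera height `h_m` are illustrative placeholders, not values from the paper.

```python
# Hedged sketch: optical-flow-based translation estimate for a
# downward-facing monocular camera. Assumes a flat ground plane and
# known (hypothetical) focal length f_px and camera height h_m.
import cv2
import numpy as np

def estimate_translation(prev_gray, curr_gray, f_px=700.0, h_m=0.3):
    """Return an approximate (dx, dy) camera motion in metres between two frames."""
    # Detect corners to track in the previous frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=10)
    if pts_prev is None:
        return 0.0, 0.0
    # Track them into the current frame with pyramidal Lucas-Kanade;
    # the iterative LK refinement yields sub-pixel flow estimates.
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good = status.ravel() == 1
    flow = (pts_curr[good] - pts_prev[good]).reshape(-1, 2)
    if len(flow) == 0:
        return 0.0, 0.0
    # Robust average of the flow field (median rejects outliers), then
    # convert pixel displacement to metric motion: dX = dx_px * h / f.
    dx_px, dy_px = np.median(flow, axis=0)
    return float(dx_px * h_m / f_px), float(dy_px * h_m / f_px)
```

A full system in the spirit of the abstract would refine this flow-based estimate with a direct (photometric) alignment step and integrate the per-frame translations into a trajectory; those stages are omitted here for brevity.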
