Abstract

Localization is among the most important prerequisites for autonomous navigation. Vision-based systems have received considerable attention in recent years owing to the numerous advantages of cameras over other sensors, and reducing the computational burden of such systems is an active research area that would make them applicable to resource-constrained platforms. This paper proposes a fast monocular approach, named ARM-VO, and compares it with two state-of-the-art algorithms, LibViso2 and ORB-SLAM2, on the Raspberry Pi 3. The approach is a sequential frame-to-frame scheme that extracts a sparse set of well-distributed features and tracks them in upcoming frames using the Kanade–Lucas–Tomasi (KLT) tracker. Robust model selection is used to avoid degenerate cases of the fundamental matrix, and scale ambiguity is resolved by incorporating the known camera height above the ground. The method is open-source [ https://github.com/zanazakaryaie/ARM-VO ] and is implemented in ROS, mostly using NEON C intrinsics, while exploiting the multi-core architecture of the CPU. Experiments on the KITTI dataset show that ARM-VO is 4-5 times faster than the other methods and is the only one that runs in near real-time on the Raspberry Pi 3. In terms of accuracy, it achieves significantly better results than LibViso2 and ranks second after ORB-SLAM2.
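To make the pipeline in the abstract concrete (sparse feature detection, KLT tracking across frames, robust epipolar-geometry estimation, and scale recovery from the known camera height), the sketch below shows a generic frame-to-frame monocular front end built from standard OpenCV calls. This is not the ARM-VO source: the input path, intrinsics, feature counts, and thresholds are illustrative placeholders, the minDistance parameter merely approximates the paper's well-distributed (grid-based) detection, and the paper's robust model selection against degenerate fundamental-matrix cases is reduced here to plain RANSAC on the essential matrix.

```cpp
// Minimal frame-to-frame KLT visual-odometry sketch (illustrative only).
// Assumptions: OpenCV 3/4; a KITTI-like video file; calibration values are
// placeholders and must come from the real camera.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap("kitti_sequence.avi");   // hypothetical input path

    // Placeholder intrinsics (KITTI-like); replace with calibrated values.
    const double fx = 718.856, cx = 607.193, cy = 185.216;
    cv::Mat K = (cv::Mat_<double>(3, 3) << fx, 0, cx, 0, fx, cy, 0, 0, 1);

    cv::Mat prevGray, gray;
    std::vector<cv::Point2f> prevPts, currPts;

    cap >> prevGray;
    if (prevGray.empty()) return 1;
    cv::cvtColor(prevGray, prevGray, cv::COLOR_BGR2GRAY);

    for (cv::Mat frame; cap.read(frame); )
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // Re-detect when the track count drops; minDistance spreads corners
        // out, a simple stand-in for the paper's grid-based distribution.
        if (prevPts.size() < 300)
            cv::goodFeaturesToTrack(prevGray, prevPts, 1000, 0.01, 20);

        // KLT: track the previous frame's features into the current frame.
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, currPts, status, err);

        // Keep only successfully tracked point pairs.
        std::vector<cv::Point2f> p0, p1;
        for (size_t i = 0; i < status.size(); ++i)
            if (status[i]) { p0.push_back(prevPts[i]); p1.push_back(currPts[i]); }

        // RANSAC essential-matrix estimation and relative pose recovery.
        // (ARM-VO's robust model selection also guards against degenerate,
        // low-parallax configurations; that logic is omitted here.)
        if (p0.size() >= 8)
        {
            cv::Mat R, t, mask;
            cv::Mat E = cv::findEssentialMat(p0, p1, K, cv::RANSAC,
                                             0.999, 1.0, mask);
            if (!E.empty() && E.rows == 3)
                cv::recoverPose(E, p0, p1, K, R, t, mask);
            // t has unit norm: monocular VO is scale-ambiguous. The paper
            // resolves this with the known camera height h above the ground:
            // if the reconstructed ground plane lies at height h_est, the
            // metric scale is h / h_est (details differ in the actual method).
        }

        prevGray = gray.clone();
        prevPts = p1;
    }
    return 0;
}
```

Note the design choice this sketch shares with the abstract: features are detected once and then tracked, rather than re-detected and matched in every frame, which is what makes the frame-to-frame KLT scheme cheap enough for a Raspberry Pi-class CPU.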
