Abstract

Low-cost navigation and positioning systems for autonomous vehicles are among the most active research areas. Determining a vehicle's position within a lane is critical for achieving high levels of automation. Vehicle navigation and positioning rely heavily on Global Navigation Satellite System (GNSS) services in open-sky scenarios. However, GNSS signals are easily degraded in environments such as urban canyons, where multi-path effects and Non-Line-of-Sight (NLOS) reception corrupt the measurements. Sensor fusion is the most common solution for performing robustly in such complex scenarios. This paper presents a radar-visual odometry framework that compensates for the unknown scale factor of a monocular camera and the poor angular resolution of radar. The framework exploits the complementary strengths of the two sensors. The results show that the proposed framework can estimate general 2D motion in an indoor environment and correct the unknown scale factor of monocular visual odometry in a real-world setting.
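The abstract does not specify how the fusion is performed, but the scale-correction idea can be illustrated with a minimal sketch: assuming radar provides a metric ego speed per frame while monocular visual odometry yields per-frame translations known only up to scale, a single least-squares ratio recovers the missing scale. All names here (estimate_scale, the synthetic data) are hypothetical and for illustration only, not the paper's actual method.

    import numpy as np

    def estimate_scale(vo_translations, radar_speeds, dt):
        """Estimate the unknown monocular-VO scale factor.

        vo_translations : (N, 2) per-frame VO translation vectors,
                          metrically ambiguous (known only up to scale).
        radar_speeds    : (N,) radar ego speeds in m/s, assumed metric.
        dt              : frame interval in seconds.
        """
        vo_norms = np.linalg.norm(vo_translations, axis=1)  # unscaled step lengths
        radar_steps = radar_speeds * dt                     # metric step lengths
        mask = vo_norms > 1e-6                              # skip near-static frames
        # Least-squares fit of radar_steps ~ s * vo_norms.
        return np.dot(radar_steps[mask], vo_norms[mask]) / np.dot(vo_norms[mask], vo_norms[mask])

    # Usage: recover the scale from synthetic data and rescale the trajectory.
    rng = np.random.default_rng(0)
    true_scale, dt = 2.5, 0.1
    vo_steps = rng.normal(0.0, 0.1, size=(100, 2)) + np.array([0.4, 0.0])
    radar_speeds = np.linalg.norm(vo_steps, axis=1) * true_scale / dt
    s = estimate_scale(vo_steps, radar_speeds, dt)
    metric_trajectory = np.cumsum(s * vo_steps, axis=0)
    print(f"recovered scale: {s:.3f}")  # ~2.500

A per-frame ratio would also work, but the least-squares estimate over many frames is more robust to radar speed noise and near-static frames, which is why it is used here.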
