An autonomous vehicle can simultaneously map its environment and identify its own position by employing a technique called “Simultaneous Localisation and Mapping” (SLAM). Autonomous mobility requires estimating the vehicle’s position as well as the locations of adjacent landmarks and objects. Monocular SLAM systems often face challenges with depth perception and scale ambiguity, leading to trajectory drift over time, whereas stereo SLAM systems use dual cameras to overcome these limitations. The purpose of this work is to assess how well visual SLAM systems perform by comparing their trajectory estimates against ground-truth data obtained from simulations. The findings indicate that stereo visual SLAM algorithms provide more accurate camera trajectory estimates than monocular SLAM, making them a preferable choice for applications demanding precise camera localization and mapping in autonomous vehicles.
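The abstract does not state which error metric is used to compare estimated trajectories with ground truth; a common choice in visual SLAM evaluation is the absolute trajectory error (ATE) after rigid alignment. The sketch below is a minimal illustration of that kind of comparison, assuming NumPy and hypothetical trajectory files, and is not drawn from the paper itself.

```python
import numpy as np

def align_umeyama(est, gt):
    """Rigid (rotation + translation) alignment of estimated positions
    to ground truth via the Umeyama method; est and gt are (N, 3) arrays."""
    mu_est, mu_gt = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_est, gt - mu_gt
    U, _, Vt = np.linalg.svd(G.T @ E / len(est))  # cross-covariance SVD
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1  # correct for a reflection
    R = U @ S @ Vt
    t = mu_gt - R @ mu_est
    return R, t

def ate_rmse(est, gt):
    """Absolute trajectory error (RMSE of position residuals) after alignment."""
    R, t = align_umeyama(est, gt)
    err = gt - (est @ R.T + t)
    return np.sqrt((err ** 2).sum(axis=1).mean())

# Hypothetical usage: trajectories exported as N x 3 position arrays
# (file names are placeholders, not from the paper).
# mono_xyz   = np.loadtxt("mono_trajectory.txt")
# stereo_xyz = np.loadtxt("stereo_trajectory.txt")
# gt_xyz     = np.loadtxt("ground_truth.txt")
# print("Mono ATE RMSE:  ", ate_rmse(mono_xyz, gt_xyz))
# print("Stereo ATE RMSE:", ate_rmse(stereo_xyz, gt_xyz))
```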