In autonomous navigation, traditional visual Simultaneous Localization and Mapping (SLAM) systems often struggle in dynamic environments because they rely on the assumption of static surroundings. To address this limitation, we introduce ARD-SLAM, a dynamic SLAM approach that combines global dense optical tracking with geometric methods. The core of ARD-SLAM is a dynamic object identification technique that integrates geometric motion information with prospective motion data. This integration enables effective segmentation of moving objects, substantially reducing their impact on camera ego-motion estimation. ARD-SLAM is further enhanced by an improved multi-view geometry method that emphasizes the selection of well-matched feature points, handling dynamic scenes efficiently while reducing computational load. Evaluation on the TUM RGB-D and Bonn RGB-D benchmark datasets shows that ARD-SLAM outperforms established methods such as ORB-SLAM2/3, DynaSLAM, SD-SLAM, DGS-SLAM, and OVD-SLAM. On average, ARD-SLAM reduces Absolute Trajectory Error (ATE) by 86.1% and Relative Pose Error (RPE) by 88.0% compared to ORB-SLAM3. Results on the Bonn RGB-D dataset further confirm its effectiveness: ARD-SLAM improves ATE and RPE, respectively, by 37.8% and 66.4% over DynaSLAM, by 41.2% and 73.1% over DGS-SLAM, and by 48.9% and 79.7% over OVD-SLAM. This robust performance in dynamically changing environments establishes ARD-SLAM as a significant advance in SLAM technology, offering a more precise and adaptable solution for the challenges of real-world autonomous navigation.
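The abstract does not detail how geometric motion information is used to flag moving objects. As a rough illustration of the general idea behind geometry-based dynamic point rejection, the Python/OpenCV sketch below fits a fundamental matrix to the dominant (static-background) motion with RANSAC and flags matches that violate its epipolar constraint as potentially dynamic. The function name, the pixel threshold, and the choice of a RANSAC-fitted fundamental matrix are illustrative assumptions, not ARD-SLAM's actual algorithm.

```python
import numpy as np
import cv2


def reject_dynamic_matches(pts_prev, pts_curr, err_thresh=1.0):
    """Flag matched feature points that are likely on moving objects.

    Fits a fundamental matrix F to the matches with RANSAC, then marks
    matches whose point-to-epipolar-line distance exceeds `err_thresh`
    pixels as dynamic. Illustrative sketch only, not the paper's method.
    """
    pts_prev = np.asarray(pts_prev, dtype=np.float32)
    pts_curr = np.asarray(pts_curr, dtype=np.float32)

    # RANSAC fits F to the dominant motion (the static background);
    # points on independently moving objects tend to be outliers.
    F, inlier_mask = cv2.findFundamentalMat(
        pts_prev, pts_curr, cv2.FM_RANSAC, err_thresh, 0.99)
    if F is None:
        # Estimation failed; conservatively treat nothing as dynamic.
        return np.zeros(len(pts_prev), dtype=bool)

    # Distance from each current point to its epipolar line l' = F x.
    ones = np.ones((len(pts_prev), 1), dtype=np.float32)
    x1 = np.hstack([pts_prev, ones])   # homogeneous points, image 1
    x2 = np.hstack([pts_curr, ones])   # homogeneous points, image 2
    lines = x1 @ F.T                   # rows are epipolar lines (a, b, c)
    denom = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)
    dist = np.abs(np.sum(x2 * lines, axis=1)) / np.maximum(denom, 1e-9)

    # A match is dynamic if it violates the epipolar constraint or was
    # already rejected as a RANSAC outlier.
    dynamic = dist > err_thresh
    dynamic |= (inlier_mask.ravel() == 0)
    return dynamic
```

In a pipeline of this kind, the surviving (static) matches would then feed the ego-motion estimation, while the flagged points seed the segmentation of moving objects; ARD-SLAM's full identification step, which also incorporates prospective motion data, is described in the paper itself.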