Abstract

Feature point extraction plays a key role in visual simultaneous localization and mapping (SLAM) systems, and accurately selecting static feature points in complex dynamic environments remains a major challenge. To address this issue, this paper proposes an RGB-D SLAM method, referred to as DE-RGBD SLAM, which optimizes feature selection by integrating depth information and effectively exploits depth data and multi-view geometric information to achieve localization and navigation for mobile robots in dynamic environments. Firstly, the method analyzes salient feature regions in the image based on the colour and depth information captured by an RGB-D camera, sets adaptive FAST corner detection thresholds according to the grayscale information of these regions, and masks the remaining areas. Next, the method obtains the depth of each detected feature point in the current frame and combines it with the point's pixel coordinates in the image coordinate system to identify redundant feature points; notably, this step can already detect some dynamic feature points between consecutive frames. Subsequently, in the camera coordinate system, the method compares the depth of each feature point measured in the depth image with the epipolar depth estimate derived from the essential matrix to decide whether the point is static, and eliminates dynamic feature points. This check significantly enhances the reliability of the retained static feature points. Finally, the accuracy and robustness of the proposed method are validated through experiments on the public TUM dataset and in real-world scenarios, in comparison with state-of-the-art visual SLAM systems.
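The depth-consistency test at the heart of the described pipeline can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the function names (`epipolar_depth`, `is_static`), the least-squares triangulation, and the 5% relative tolerance are illustrative assumptions. The idea is that for a static point, the depth measured by the RGB-D sensor should agree with the depth implied by the two-view epipolar geometry (the relative pose recoverable from the essential matrix); a moving point violates this constraint.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def epipolar_depth(x1, x2, R, t):
    """Depth of a point in camera 1 estimated from two views.
    x1, x2: normalized image coordinates [u, v, 1] in frames 1 and 2;
    (R, t): relative pose mapping frame-1 points into frame 2.
    Solves [x2]_x (d * R @ x1 + t) = 0 for d in least squares."""
    a = skew(x2) @ R @ x1
    b = -skew(x2) @ t
    return float(a @ b / (a @ a))

def is_static(x1, x2, R, t, measured_depth, rel_tol=0.05):
    """Flag a feature as static if the sensor depth agrees with the
    epipolar depth estimate within a relative tolerance (illustrative)."""
    d_est = epipolar_depth(x1, x2, R, t)
    return abs(d_est - measured_depth) <= rel_tol * measured_depth

# Synthetic example: camera 2 is camera 1 translated 0.1 m along +x.
R = np.eye(3)
t = np.array([-0.1, 0.0, 0.0])           # frame change: X2 = R @ X1 + t
P = np.array([0.2, -0.1, 2.0])           # static 3-D point in frame 1
x1 = P / P[2]
x2 = R @ P + t; x2 /= x2[2]
print(is_static(x1, x2, R, t, measured_depth=2.0))      # → True (static)

P_moved = P + np.array([0.3, 0.0, 0.0])  # same point after object motion
x2_dyn = R @ P_moved + t; x2_dyn /= x2_dyn[2]
print(is_static(x1, x2_dyn, R, t, measured_depth=2.0))  # → False (dynamic)
```

For the static point the epipolar estimate recovers the true depth (2.0 m), so it passes; for the moved point the two-view geometry implies a depth inconsistent with the sensor reading, so it is rejected as dynamic.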