This paper proposes an RGB-D visual odometry method that leverages point, line, and plane features together with Manhattan structures to achieve robust frame tracking and accurate pose estimation, especially in textureless scenes. A validation method is introduced that ensures accurate frame-to-frame rotation estimation by comparing the rotation angles computed from multiple Manhattan structures. By investigating the covariance of the sensor's depth measurements, we implement depth verification involving parameter fitting and outlier removal for point, line, and plane features. We also employ local bundle adjustment in the local mapping thread to refine keyframe poses and landmarks. Comprehensive ablation studies confirm the effectiveness of each contribution. Experimental results on public datasets demonstrate that our method achieves clear gains in accuracy and robustness while maintaining real-time performance.
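The rotation-validation idea can be illustrated with a minimal sketch: assuming each detected Manhattan structure yields a candidate frame-to-frame rotation matrix, one can accept the estimate only when all candidates mutually agree within an angular threshold. The pairwise-agreement scheme and the threshold value below are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def rotation_angle_deg(R_a, R_b):
    """Angle (in degrees) of the relative rotation between two rotation matrices."""
    R_rel = R_a.T @ R_b
    # trace(R) = 1 + 2*cos(theta) for a rotation matrix; clip for numerical safety
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def validate_rotation(candidates, max_disagreement_deg=1.0):
    """Accept the frame-to-frame rotation only if the candidate rotations
    derived from all detected Manhattan structures mutually agree.

    candidates: list of 3x3 rotation matrices (one per Manhattan structure).
    """
    if not candidates:
        return False
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            if rotation_angle_deg(candidates[i], candidates[j]) > max_disagreement_deg:
                return False
    return True
```

For example, two candidates differing by a 0.5° rotation about the z-axis pass a 1° threshold, while candidates differing by 5° are rejected, falling back to feature-based rotation estimation in such a pipeline.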