We propose a robust geometry-constrained scale estimation approach for monocular visual odometry that uses the camera height as an absolute reference. Visual odometry is an essential module for robot self-localization and autonomous navigation in unexplored environments. Scale recovery is indispensable for monocular visual odometry, since it compensates for the metric information lost by a single camera and helps reduce scale drift. When the camera height serves as the absolute reference, the precision of scale recovery depends on the accuracy of both the road point selection and the road geometric model estimation. However, most previous approaches solve these two problems sequentially, and their road point selection relies on a color model of the road or on a fixed region defined by prior knowledge. In this paper, we propose combining the two problems and solving them iteratively. We adopt the geometric model of the road, rather than its color model, to select road points; in turn, the selected road feature points are used to estimate the road model, which constrains subsequent road point selection. Specifically, we segment the feature points with Delaunay triangulation and select road points based on depth consistency and road model consistency. Experiments on the KITTI dataset show that our method achieves the best performance among state-of-the-art monocular visual odometry methods.
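To illustrate the camera-height-based scale recovery the abstract describes, the following is a minimal sketch, not the authors' implementation: assuming road points have already been selected from an up-to-scale monocular reconstruction, it fits a road plane and computes the absolute scale as the ratio of the known camera height to the estimated (up-to-scale) height above that plane. All function and variable names are illustrative assumptions.

```python
import numpy as np


def fit_road_plane(points):
    """Least-squares plane fit n.x + d = 0 (with ||n|| = 1) to an Nx3 array of road points."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -normal.dot(centroid)
    return normal, d


def recover_scale(road_points, true_camera_height):
    """Absolute scale = known camera height / up-to-scale height of the camera above the road plane."""
    normal, d = fit_road_plane(road_points)
    # The camera center is the origin of the camera frame, so its distance to the
    # plane is |n.0 + d| / ||n|| = |d| (since ||n|| = 1).
    estimated_height = abs(d)
    return true_camera_height / estimated_height


if __name__ == "__main__":
    # Synthetic example: noisy road points roughly 1.7 units below the camera
    # in a y-down camera frame (a commonly assumed KITTI-like camera height).
    rng = np.random.default_rng(0)
    pts = np.column_stack([
        rng.uniform(-5.0, 5.0, 200),                     # lateral
        1.7 + 0.01 * rng.standard_normal(200),           # vertical (down)
        rng.uniform(4.0, 30.0, 200),                     # forward
    ])
    scale = recover_scale(pts, true_camera_height=1.7)
    # Rescale an up-to-scale translation from monocular VO into metric units.
    t_metric = scale * np.array([0.0, 0.0, 1.0])
    print(scale, t_metric)
```

In the iterative scheme the abstract outlines, such a plane fit and the road point selection would alternate: points consistent with the current road model (and with neighboring depths) are kept, and the model is refit from them until it stabilizes.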