Abstract

The performance of traditional visual odometry algorithms may degrade when a scene contains dynamic objects. In this paper, we propose a novel spatiotemporal visual odometry method for dynamic indoor environments using RGB-D cameras. First, to improve data association, the complete ground plane is detected and incorporated into the optimization function for coarse pose estimation. Then, spatial and temporal information are fused via an undirected network to improve the segmentation of moving objects. Finally, the pose is computed accurately via a coarse-to-fine strategy. Experimental results demonstrating the performance of the proposed method are presented, and the factors that affect measurement accuracy are analyzed.
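The abstract's pipeline (plane-aided coarse estimation, dynamic-point rejection, coarse-to-fine refinement) can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes matched RGB-D point pairs are already available, uses a plain least-squares plane fit in place of the paper's ground-plane detector, and substitutes residual-based inlier selection for the spatiotemporal segmentation. All function names here are illustrative.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns unit normal n and offset d with n.x + d = 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]  # singular vector with smallest singular value = plane normal
    return n, -n @ centroid

def estimate_pose(src, dst):
    """Kabsch rigid alignment: rotation R and translation t with dst ~ src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def coarse_to_fine(src, dst, inlier_frac=0.7):
    """Coarse pose from all correspondences, then refinement on low-residual
    points (a crude stand-in for rejecting points on moving objects)."""
    R, t = estimate_pose(src, dst)  # coarse step
    residuals = np.linalg.norm(dst - (src @ R.T + t), axis=1)
    keep = residuals <= np.quantile(residuals, inlier_frac)
    return estimate_pose(src[keep], dst[keep])  # fine step
```

In this sketch, points on dynamic objects show up as high-residual correspondences after the coarse step, so discarding them before re-estimating mimics, very roughly, the benefit of the paper's moving-object segmentation.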
