Abstract

A novel RGB-D visual odometry method for dynamic environments is proposed. Most visual odometry systems work only in static environments, which limits their real-world applications. To improve the accuracy and robustness of visual odometry in dynamic environments, a Feature Regions Segmentation algorithm is proposed to resist the disturbance caused by moving objects. The matched features are divided into different regions to separate moving objects from the static background, and the features in the largest region, which belong to the static background, are then used to estimate the camera pose. The effectiveness of our visual odometry method is verified in a dynamic environment in our lab. Furthermore, an exhaustive experimental evaluation is conducted on benchmark datasets covering both static and dynamic environments, in comparison with state-of-the-art visual odometry systems. The accuracy comparisons show that the proposed algorithm outperforms those systems in large-scale dynamic environments: our method tracks the camera motion correctly where the others fail. In addition, our method performs comparably well in static environments. The experiments demonstrate that the proposed RGB-D visual odometry obtains accurate and robust estimation results in dynamic environments.
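The core idea of the segmentation step can be sketched as follows: group matched features by the consistency of their image-space motion, and keep the largest group as the static background before pose estimation. This is a minimal illustrative sketch, not the paper's exact algorithm; the function name, the greedy grouping scheme, and the `tol` threshold are all assumptions introduced here for illustration.

```python
import numpy as np

def largest_consistent_region(prev_pts, curr_pts, tol=2.0):
    """Group matched features whose displacement vectors agree within `tol`
    pixels, and return the indices of the largest group, which is assumed
    to correspond to the static background.

    Hypothetical helper sketching the feature-region idea; the real method
    segments regions more carefully (e.g., using depth and spatial cues)."""
    disp = np.asarray(curr_pts, dtype=float) - np.asarray(prev_pts, dtype=float)
    n = len(disp)
    labels = -np.ones(n, dtype=int)  # -1 means "not yet assigned to a region"
    next_label = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        # Greedily assign every unlabeled feature with similar motion
        # to the region seeded by feature i.
        for j in range(i + 1, n):
            if labels[j] == -1 and np.linalg.norm(disp[j] - disp[i]) < tol:
                labels[j] = next_label
        next_label += 1
    counts = np.bincount(labels)
    return np.flatnonzero(labels == counts.argmax())
```

In a full pipeline, the returned indices would select the feature correspondences fed to the pose solver (e.g., PnP or ICP on the RGB-D points), so that features on moving objects do not bias the camera motion estimate.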
