Abstract
Visual localization has been well studied in recent decades and, as a fundamental capability in robotics, has been applied in many fields. However, the success of state-of-the-art methods usually rests on the assumption that the environment is static. In dynamic scenarios where moving objects are present, the performance of existing visual localization systems degrades significantly due to the disturbance of dynamic factors. To address this problem, we propose a novel sparse motion removal (SMR) model that detects the dynamic and static regions of an input frame based on a Bayesian framework. Both the similarity between consecutive frames and the difference between the current frame and a reference frame are considered to reduce the detection uncertainty. After detection, the dynamic regions are eliminated while the static ones are fed into a feature-based visual simultaneous localization and mapping (SLAM) system for further visual localization. To verify the proposed method, both qualitative and quantitative experiments are performed; the results demonstrate that the proposed model can significantly improve the accuracy and robustness of visual localization in dynamic environments.

Note to Practitioners — This article was motivated by the visual localization problem in dynamic environments. Visual localization is widely applied in robotic fields such as path planning and exploration, as it is a basic capability for a mobile robot. In GPS-denied environments, a robot must localize itself by perceiving the unknown environment with a visual sensor. In real-world scenes, moving objects significantly degrade localization accuracy, which makes robot deployment unreliable. In this article, an SMR model is designed to handle this problem. Upon receiving a frame, the proposed model divides it into dynamic and static regions through a Bayesian framework.
The dynamic regions are eliminated, while the static ones are retained and fed into a feature-based visual SLAM system for further visual localization. The proposed method greatly improves localization accuracy in dynamic environments and ensures robustness for robotic implementation.
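The abstract describes fusing two per-region cues (similarity across consecutive frames and difference against a reference frame) in a Bayesian framework to label regions as dynamic or static. The sketch below is a toy illustration of that idea, not the paper's actual model: the likelihood forms, the `dynamic_posterior` function, the cue normalization to [0, 1], and the 0.5 decision threshold are all assumptions made for demonstration.

```python
import numpy as np

def dynamic_posterior(sim_consecutive, diff_reference, prior_dynamic=0.5):
    """Toy Bayesian fusion of two per-region motion cues (illustrative only).

    sim_consecutive : array in [0, 1]; similarity of a region across
                      consecutive frames (high similarity -> likely static).
    diff_reference  : array in [0, 1]; difference of the region against a
                      reference frame (high difference -> likely dynamic).
    Returns the posterior probability that each region is dynamic.
    """
    # Assumed likelihood models: the two cues are treated as independent
    # soft evidence for the "dynamic" vs. "static" hypotheses.
    lik_dynamic = (1.0 - sim_consecutive) * diff_reference
    lik_static = sim_consecutive * (1.0 - diff_reference)
    # Bayes' rule with a small epsilon to avoid division by zero.
    return (lik_dynamic * prior_dynamic) / (
        lik_dynamic * prior_dynamic
        + lik_static * (1.0 - prior_dynamic) + 1e-9)

# Region 0: stable across frames and close to the reference -> static.
# Region 1: changed between frames and far from the reference -> dynamic.
sim = np.array([0.95, 0.30])
diff = np.array([0.05, 0.85])
post = dynamic_posterior(sim, diff)
static_mask = post < 0.5  # only static regions feed the SLAM front end
```

Under this scheme, only regions whose posterior falls below the threshold would pass their features to the downstream SLAM system, mirroring the elimination step described in the abstract.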
Published in: IEEE Transactions on Automation Science and Engineering