Abstract
Visual navigation technology enables the pose of a robot to be estimated and its surrounding environment to be perceived using a vision sensor mounted on the robot. This technology is essential to autonomous driving systems in unmanned mobile vehicles and has been actively researched in visual odometry (VO) and visual simultaneous localization and mapping (vSLAM). In general, vision-based navigation algorithms perform data association and pose estimation under the assumptions that the brightness of the surrounding environment does not change over time and that the scene observed by the vision sensor is static. In realistic industrial sites and urban environments, however, illumination varies and dynamic objects such as workers and cars are present, and these conditions can degrade the reliability and performance of visual navigation. Research on robust visual navigation under such environmental variations, namely illumination changes and dynamic scenes, has sought to solve this problem. This study reviews state-of-the-art visual navigation systems that are robust to illumination changes and dynamic environments, and analyzes and classifies them according to the methodology each system employs.
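To make the two assumptions concrete, the sketch below (an illustrative example, not taken from any surveyed system) shows a minimal feature-based two-view pose estimation step using OpenCV and NumPy; the function name `estimate_relative_pose` and the parameter values are hypothetical choices for this sketch. Descriptor matching implicitly presumes brightness constancy (a landmark's appearance is stable across frames), and the essential-matrix step presumes the matched points belong to a static scene.

```python
import numpy as np
import cv2

def estimate_relative_pose(img1, img2, K):
    """Illustrative two-view VO step: match ORB features, recover relative pose.

    Matching assumes brightness constancy (appearance is stable between
    frames); pose estimation assumes the matched points lie on a static
    scene. K is the 3x3 camera intrinsic matrix.
    """
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching of binary ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC; the inlier mask rejects mismatches
    # and some points on independently moving objects.
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # rotation and unit-scale translation of frame 2 w.r.t. frame 1
```

RANSAC can discard a handful of mismatches or points on moving objects, but when illumination shifts strongly or dynamic objects dominate the image, both assumptions break down; this is the failure mode that the robust visual navigation methods reviewed in this study are designed to address.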