Abstract

This paper surveys visual navigation methods that are robust to illumination changes. Visual navigation, which involves estimating robot pose and reconstructing the surrounding environment, has been the focus of extensive research in the field of autonomous mobile vehicles. Initially, visual navigation was divided into a localization problem and a mapping problem, and attempts were made to solve each independently. Owing to the close interdependence between the two problems, they were gradually integrated into the visual simultaneous localization and mapping (vSLAM) problem. vSLAM has evolved from filter-based methods to optimization-based methods that achieve high accuracy in real time. However, such vision-based navigation systems perform data association and pose estimation under the assumption that the illumination of the environment does not change, which is not guaranteed in the real world. Therefore, research efforts are being made to make visual navigation systems robust to illumination changes so that they can be deployed in a wide range of environments. In this paper, we survey state-of-the-art research on visual navigation robust to illumination changes.
