Abstract

Localization of a vehicle in a 3D environment is a fundamental task in autonomous driving. In low-light environments, it is difficult for an autonomous vehicle to navigate independently using visual odometry, mainly because images captured in scenes with insufficient illumination are blurred and poorly exposed. Although numerous works have addressed this issue, existing approaches still suffer from a number of inherent drawbacks. In this paper, we propose a lightweight stereo visual odometry system for the navigation of autonomous vehicles in low-light conditions. In contrast to existing image-recovery methods, we decompose each captured image into an illumination image and a reflectance image following Retinex theory, and estimate only the illumination component to obtain an enhanced map of the low-light image. In addition, we employ a simplified and rapid feature detection scheme that reduces computation time by about 85% while maintaining matching accuracy comparable to that of ORB features. Finally, experiments show that the average memory consumption of the proposed method is much lower than that of conventional algorithms.
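
As a rough illustration of the Retinex decomposition mentioned above (not the paper's exact estimator), the sketch below models a grayscale image as I = R · L, approximates the illumination map L with Gaussian smoothing, and recovers the reflectance R as the enhanced image. The Gaussian estimator and the `sigma` parameter are assumptions made only for this example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_enhance(image, sigma=15):
    """Enhance a low-light grayscale image via a simple Retinex decomposition.

    Retinex theory models an image as I = R * L, where R is the reflectance
    (scene content) and L is the illumination. Here L is approximated by
    Gaussian smoothing, and the enhanced image is the reflectance R = I / L,
    which suppresses the effect of insufficient illumination.
    """
    img = image.astype(np.float64) + 1e-6              # avoid division by zero
    illumination = gaussian_filter(img, sigma=sigma)   # estimated illumination map L
    reflectance = img / illumination                    # reflectance R = I / L

    # Rescale the reflectance to [0, 255] for display or feature detection.
    lo, hi = reflectance.min(), reflectance.max()
    reflectance = (reflectance - lo) / (hi - lo + 1e-6)
    return (reflectance * 255).astype(np.uint8)
```

A sharper illumination estimator (e.g., multi-scale smoothing or an optimization-based model, as in the paper) would preserve edges better, but the division step and the idea of estimating only the illumination component are the same.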
