Abstract
Cameras have emerged as the dominant sensor modality for localization and mapping in three-dimensional, unstructured terrain, largely due to the success of sparse, appearance-based techniques such as visual odometry. However, the Achilles' heel of all camera-based systems is their dependence on consistent ambient lighting, which poses a serious problem in outdoor environments that lack adequate or consistent light, such as the Moon. Actively illuminated sensors, on the other hand, such as a light detection and ranging (lidar) device, use their own light source to illuminate the scene, making them a favourable alternative in light-denied environments. The purpose of this paper is to demonstrate that the largely successful appearance-based methods traditionally used with cameras can be applied to laser-based sensors, such as a lidar. We present two experiments that are vital to understanding and enabling appearance-based methods for lidar sensors. In the first experiment, we explore the stability of a representative keypoint detection and description algorithm on both camera images and lidar intensity images collected over a 24-hour period. In the second experiment, we validate our approach by implementing visual odometry based on sparse bundle adjustment on a sequence of lidar intensity images.
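To make the core idea concrete, the sketch below treats a lidar intensity image exactly like a grayscale camera image and runs a standard sparse keypoint pipeline on it. This is a minimal illustration only: the paper does not specify its detector or dataset here, so ORB, the brute-force matcher, and the file names are assumptions standing in for the "representative keypoint detection and description algorithm" mentioned in the abstract.

```python
import cv2

# Hypothetical file names; the paper's actual lidar intensity frames are not available here.
img_a = cv2.imread("intensity_frame_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("intensity_frame_b.png", cv2.IMREAD_GRAYSCALE)

# ORB stands in for whichever keypoint detector/descriptor the paper actually evaluates.
orb = cv2.ORB_create(nfeatures=1000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Simple brute-force Hamming matching with cross-check gives sparse correspondences,
# the raw material that a bundle-adjustment-based visual odometry pipeline would consume.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
print(f"{len(matches)} putative keypoint matches between intensity frames")
```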