Abstract

In an effort to facilitate lighting-invariant exploration, this paper presents an appearance-based approach using 3D scanning laser-rangefinders for two core visual navigation techniques: visual odometry (VO) and visual teach and repeat (VT&R). The key to our method is to convert raw laser intensity data into greyscale camera-like images, in order to apply sparse, appearance-based techniques traditionally used with camera imagery. The novel concept of an image stack is introduced, which is an array of azimuth, elevation, range, and intensity images used to generate keypoint measurements and measurement uncertainties. Using this technique, we present the following four experiments. In the first experiment, we explore the stability of a representative keypoint detection/description algorithm on camera and laser intensity images collected over a 24 h period outdoors. In the second and third experiments, we validate our VO algorithm using real data collected outdoors with two different 3D scanning laser-rangefinders. Lastly, our fourth experiment presents promising preliminary VT&R localization results, where the teaching phase was done during the day and the repeating phase was done at night. These experiments show that it is possible to overcome the lighting sensitivity encountered with cameras, while continuing to exploit the heritage of the appearance-based visual odometry pipeline.
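The image-stack idea described above can be illustrated with a short sketch: an unordered 3D lidar scan (points plus per-point intensity) is projected into a fixed angular grid, yielding aligned azimuth, elevation, range, and intensity rasters, the last of which serves as the greyscale camera-like image for a keypoint detector. The grid resolution, field of view, and function names below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def build_image_stack(points, intensity, height=64, width=360):
    """Project an unordered lidar scan into a 4-channel image stack:
    azimuth, elevation, range, and intensity rasters on a common grid.
    (Illustrative sketch; resolution and layout are assumptions.)"""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)                      # azimuth in [-pi, pi)
    el = np.arcsin(z / np.maximum(rng, 1e-9))  # elevation angle

    # Map angles to raster coordinates (full 360 deg azimuth sweep;
    # elevation rows span the observed elevation range of this scan).
    col = ((az + np.pi) / (2 * np.pi) * width).astype(int) % width
    el_min, el_max = el.min(), el.max()
    row = ((el_max - el) / max(el_max - el_min, 1e-9)
           * (height - 1)).astype(int)

    stack = np.zeros((4, height, width), dtype=np.float32)
    stack[0, row, col] = az
    stack[1, row, col] = el
    stack[2, row, col] = rng
    stack[3, row, col] = intensity
    return stack

# Synthetic scan: random points around (5, 0, 0) with made-up intensities.
gen = np.random.default_rng(0)
pts = gen.normal(size=(1000, 3)) + np.array([5.0, 0.0, 0.0])
inten = gen.uniform(0.0, 255.0, size=1000)
stack = build_image_stack(pts, inten)
# stack[3] is the greyscale-like intensity image one would feed to a
# sparse keypoint detector/descriptor, as in the camera-based pipeline.
```

Because the range and intensity rasters are pixel-aligned, a keypoint detected in the intensity image can be back-projected to a 3D measurement directly from the corresponding range, azimuth, and elevation pixels.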