Abstract

Visual localization is widely used in autonomous navigation systems and Advanced Driver Assistance Systems (ADAS). This paper presents a visual localization method based on multifeature fusion and disparity information from stereo images. We integrate disparity information into complete center-symmetric local binary patterns (CSLBP) to obtain a robust global image description (D-CSLBP). To describe the scene more completely, the fusion of D-CSLBP and HOG features provides complementary information and mitigates typical place-recognition problems such as perceptual aliasing; it improves recognition performance by taking advantage of depth, texture, and shape information. In addition, for real-time visual localization, locality-sensitive hashing (LSH) is used to compress the high-dimensional multifeature into binary vectors, which speeds up image matching. To show its effectiveness, the proposed method is tested and evaluated on real datasets acquired in outdoor environments. The results show that our approach achieves more effective visual localization than the state-of-the-art FAB-MAP method.
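As a rough, illustrative sketch of the hashing step mentioned above (not the paper's implementation), the Python snippet below uses random-projection LSH to compress a high-dimensional fused descriptor into a binary code and compares codes with the Hamming distance; the descriptor dimension, code length, and random seed are assumptions chosen only for the example.

```python
import numpy as np

def make_lsh_hasher(dim, n_bits, seed=0):
    """Random-projection LSH: one random hyperplane per output bit."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, dim))

    def hash_fn(feature):
        # A bit is 1 when the feature lies on the positive side of that hyperplane.
        return (planes @ np.asarray(feature) > 0).astype(np.uint8)

    return hash_fn

def hamming_distance(code_a, code_b):
    """Number of differing bits between two binary codes."""
    return int(np.count_nonzero(code_a != code_b))

# Illustrative use: hash two 3000-dimensional fused descriptors to 256-bit codes
# and compare them; real descriptors would replace the random vectors.
hasher = make_lsh_hasher(dim=3000, n_bits=256)
code_query = hasher(np.random.rand(3000))
code_db = hasher(np.random.rand(3000))
print(hamming_distance(code_query, code_db))
```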

Highlights

  • One of the prerequisites of navigation is that the vehicle or robot be able to reliably determine its position within its environment

  • The Fast Appearance-Based Mapping (FAB-MAP) approach matches the appearance of the current scene to previously visited places by converting images into bag-of-words representations built on local features such as SIFT or SURF

  • The multifeature concatenates D-CSLBP and histogram of oriented gradients (HOG) features to take advantage of texture, depth, and shape information (a simple fusion sketch follows these highlights)
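As a hedged illustration of this fusion step, the sketch below concatenates a HOG descriptor with a precomputed D-CSLBP histogram; the HOG parameters, the L2 normalization, and the placeholder `d_cslbp_hist` input are assumptions for the example, not the paper's settings.

```python
import numpy as np
from skimage.feature import hog

def fuse_descriptors(gray_image, d_cslbp_hist):
    """Concatenate a HOG descriptor with a precomputed D-CSLBP histogram.

    `d_cslbp_hist` stands in for the paper's disparity-augmented CS-LBP
    histogram, which is not reproduced here.
    """
    hog_vec = hog(gray_image, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), block_norm='L2-Hys')
    # L2-normalize each part so neither dominates the fused vector
    # (an illustrative choice).
    hog_vec = hog_vec / (np.linalg.norm(hog_vec) + 1e-12)
    lbp_vec = d_cslbp_hist / (np.linalg.norm(d_cslbp_hist) + 1e-12)
    return np.concatenate([lbp_vec, hog_vec])
```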


Summary

Introduction

One of the prerequisites of navigation is that the vehicle or robot be able to reliably determine its position within its environment. The FAB-MAP approach matches the appearance of the current scene to the same (or a similar) previously visited place by converting images into bag-of-words representations built on local features such as SIFT or SURF. In local-feature-based place recognition approaches, the image representation is a collection of local features; their robustness to local image variations stems from this collection, and their discriminative power from the feature descriptors. Most of these works exhibit a high computational cost or require complex feature extraction for image matching [5, 6]. Few works pay attention to depth information for visual place recognition.
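To make the bag-of-words idea concrete, the sketch below clusters local descriptors into a visual vocabulary and represents each image as a histogram of visual-word occurrences. It is a simplification under stated assumptions, not FAB-MAP's pipeline: ORB and k-means are used only to keep the example self-contained, whereas FAB-MAP builds its vocabulary on SURF or SIFT features and adds a probabilistic appearance-matching model on top.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_list, n_words=100):
    """Cluster local descriptors from a set of training images into visual words."""
    all_desc = np.vstack(descriptor_list).astype(np.float32)
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_desc)

def bow_histogram(image_gray, vocabulary):
    """Bag-of-words descriptor: normalized histogram of visual-word occurrences."""
    orb = cv2.ORB_create()
    _, desc = orb.detectAndCompute(image_gray, None)
    if desc is None:
        return np.zeros(vocabulary.n_clusters)
    words = vocabulary.predict(desc.astype(np.float32))
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(np.float64)
    return hist / hist.sum()
```

Two images can then be compared by a distance between their histograms (e.g., cosine or chi-squared), which is the appearance-matching step that FAB-MAP refines with its probabilistic model.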
