Abstract

Self-localization is a fundamental task for connected autonomous vehicles. This paper proposes a visual self-localization method that matches images from an in-vehicle monocular camera against images in a visual map. We use the classic ORB (Oriented FAST and Rotated BRIEF) method to encode both holistic and local features of an input image. For the holistic feature, each input image is normalized into a 63 × 63 image patch, the patch center is taken as the ORB keypoint position, and the corresponding ORB descriptor serves as the holistic feature. In addition, we extract local features by representing all ORB descriptors extracted from the original input image as visual words, using the classic Bag-of-Words (BoW) method. Finally, we extend the hybrid K-nearest neighbor (H-KNN) method to fuse the ORB-encoded holistic and local features for position (site) recognition. The proposed self-localization method was validated on images collected along a 3.2 km road segment in Wuhan City, China, covering different road scenes such as bridges, curved roads, straight roads, crossroads, and tunnels. Experimental results show that the proposed method achieved a 77.2% image recognition rate and took about 19 ms on average to localize from one image, with average positioning errors within 5 m. These results demonstrate that the proposed method is promising, in terms of positioning accuracy and speed, for developing low-cost self-localization devices for autonomous vehicles.
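The two-stage matching described above (holistic descriptor first, then local BoW features) could be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic binary descriptors stand in for real ORB output, and the candidate size `k` and the histogram-intersection re-scoring are our own assumptions about how the H-KNN fusion might be realized.

```python
import numpy as np

def hamming(a, b):
    # Hamming distance between two binary (uint8-packed) descriptors,
    # as used to compare ORB features.
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def hknn_localize(query_holistic, query_bow, map_holistic, map_bow, k=5):
    """Hypothetical H-KNN fusion sketch.

    Stage 1: rank map images by Hamming distance on the holistic
    ORB descriptor and keep the k nearest candidates.
    Stage 2: re-score the candidates with the BoW histograms of
    local features (histogram intersection) and return the index
    of the best-matching map image.
    """
    dists = np.array([hamming(query_holistic, m) for m in map_holistic])
    candidates = np.argsort(dists)[:k]
    scores = [np.minimum(query_bow, map_bow[i]).sum() for i in candidates]
    return int(candidates[int(np.argmax(scores))])
```

In this sketch the holistic descriptor prunes the map to a few plausible sites cheaply, and the finer-grained BoW histogram breaks ties among visually similar locations.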
