Abstract

In the field of autonomous driving, vehicles are equipped with a variety of sensors, including cameras and LiDARs. However, cameras suffer from illumination changes and occlusion, while LiDAR is affected by motion distortion, degenerate environments, and limited ranging distance. Fusing the information from these two sensors is therefore worth exploring. In this paper, we propose a fusion network that robustly captures both image and point cloud descriptors to solve the place recognition problem. Our contributions can be summarized as: (1) applying a trimmed strategy in the point cloud global feature aggregation to improve recognition performance, (2) building a compact fusion framework that captures robust representations of both the image and the 3D point cloud, and (3) learning a proper metric to describe the similarity of our fused global feature. Experiments on the KITTI and KAIST datasets show that the proposed fused descriptor is more robust and discriminative than single-sensor descriptors.
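
To make the three contributions above concrete, the following is a minimal, self-contained PyTorch sketch of what such a pipeline could look like: a toy image branch, a toy point-cloud branch with an assumed "trimmed" pooling (a per-channel trimmed mean over points), concatenation into one fused descriptor, and a triplet margin loss as the learned similarity metric. All module names, dimensions, and the trimming rule are illustrative assumptions, not the architecture described in the paper.

```python
# Hypothetical sketch of a fused image + point-cloud place descriptor with metric learning.
# Module names, dimensions, and the "trimmed" pooling rule are assumptions for illustration.
import torch
import torch.nn as nn


class ImageBranch(nn.Module):
    """Toy CNN that maps an RGB image to a compact global descriptor."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, img):                      # img: (B, 3, H, W)
        return self.fc(self.features(img).flatten(1))


def trimmed_pool(x, trim_ratio=0.1):
    """Assumed 'trimmed' aggregation: per channel, drop the largest and smallest
    activations across points, then average the rest (a trimmed mean)."""
    n = x.shape[-1]                              # x: (B, C, N)
    k = int(n * trim_ratio)
    x_sorted, _ = torch.sort(x, dim=-1)
    kept = x_sorted[..., k:n - k] if k > 0 else x_sorted
    return kept.mean(dim=-1)                     # (B, C)


class PointBranch(nn.Module):
    """Toy PointNet-style encoder: shared per-point MLP, then trimmed pooling."""
    def __init__(self, dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, dim, 1), nn.ReLU(),
        )

    def forward(self, pts):                      # pts: (B, 3, N)
        return trimmed_pool(self.mlp(pts))


class FusionDescriptor(nn.Module):
    """Concatenate both branch descriptors and project to a single fused vector."""
    def __init__(self, dim=128):
        super().__init__()
        self.img_branch = ImageBranch(dim)
        self.pc_branch = PointBranch(dim)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, img, pts):
        d = torch.cat([self.img_branch(img), self.pc_branch(pts)], dim=1)
        return nn.functional.normalize(self.fuse(d), dim=1)   # L2-normalised descriptor


if __name__ == "__main__":
    model = FusionDescriptor()
    # Anchor / positive / negative places, each with an image and a point cloud.
    imgs = [torch.randn(2, 3, 64, 64) for _ in range(3)]
    clouds = [torch.randn(2, 3, 1024) for _ in range(3)]
    a, p, n = (model(i, c) for i, c in zip(imgs, clouds))
    # Metric learning: pull same-place descriptors together, push different ones apart.
    loss = nn.TripletMarginLoss(margin=0.5)(a, p, n)
    print(loss.item())
```

In a real system the image branch would typically be a pretrained CNN and the point branch a full PointNet-style encoder, but the data flow and the role of the metric loss are the same as in this toy version.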

Highlights

  • To make a fair comparison with different methods, we evaluate our approach against the existing open-source algorithms NetVLAD [34] and PointNetVLAD [36] on the same device

  • PointNetVLAD [36] is a point-based approach that combines the concepts of PointNet and NetVLAD, applying an image-based aggregation method to the point cloud (a rough sketch of this idea follows the list)

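The aggregation idea mentioned in the last highlight can be sketched as follows: per-point local features are softly assigned to a set of learned cluster centres and summed as residuals, which is the core of a NetVLAD layer. The feature dimension, cluster count, and layer structure below are my own simplifications for illustration and are not taken from PointNetVLAD's released code.

```python
# Rough, simplified sketch of a NetVLAD-style layer applied to per-point features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NetVLADLayer(nn.Module):
    def __init__(self, feat_dim=64, num_clusters=8):
        super().__init__()
        self.centres = nn.Parameter(torch.randn(num_clusters, feat_dim))
        self.assign = nn.Conv1d(feat_dim, num_clusters, kernel_size=1)  # soft assignment

    def forward(self, x):                                     # x: (B, C, N) per-point features
        soft = F.softmax(self.assign(x), dim=1)               # (B, K, N) assignment weights
        # Residuals between every point feature and every cluster centre.
        res = x.unsqueeze(1) - self.centres.unsqueeze(0).unsqueeze(-1)   # (B, K, C, N)
        vlad = (soft.unsqueeze(2) * res).sum(dim=-1)          # (B, K, C) weighted residual sums
        vlad = F.normalize(vlad, dim=2)                       # intra-normalisation per cluster
        return F.normalize(vlad.flatten(1), dim=1)            # (B, K*C) global descriptor


if __name__ == "__main__":
    points_feat = torch.randn(2, 64, 1024)       # e.g. output of a PointNet-style shared MLP
    print(NetVLADLayer()(points_feat).shape)     # torch.Size([2, 512])
```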

Introduction

Place recognition has received a significant amount of attention in various fields, including computer vision [1,2,3,4,5,6], autonomous driving systems [7,8,9,10] and augmented reality [11]. In these tasks, place recognition answers the question "where am I along a route?". Hand-crafted visual features, such as Oriented FAST and Rotated BRIEF (ORB) [15], Scale-Invariant Feature Transform (SIFT) [16] and Speeded-Up Robust Features (SURF), are commonly used to describe the visual appearance of a place.
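
As a small illustration of how such hand-crafted features are used in practice, the snippet below extracts ORB descriptors from two images of a place with OpenCV and counts cross-checked matches. The file names are placeholders, and the raw match count is only a crude proxy for place similarity, not the method proposed in this paper.

```python
# Minimal OpenCV illustration of matching two images of a place with ORB features.
import cv2

img_query = cv2.imread("query_place.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img_map = cv2.imread("map_place.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)             # keypoint detector + binary descriptor
kp_q, des_q = orb.detectAndCompute(img_query, None)
kp_m, des_m = orb.detectAndCompute(img_map, None)

# Brute-force Hamming matching with cross-check; more matches suggests the same place.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_q, des_m), key=lambda m: m.distance)
print(f"{len(matches)} putative correspondences between query and map image")
```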

