Abstract

Place recognition is critical for both offline mapping and online localization. However, current single-sensor place recognition remains challenging in adverse conditions. In this paper, a heterogeneous-measurement-based framework is proposed for long-term place recognition, which retrieves query radar scans from existing lidar (Light Detection and Ranging) maps. To achieve this, a deep neural network is built with joint training in the learning stage, and then, in the testing stage, shared embeddings of radar and lidar are extracted for heterogeneous place recognition. To validate the effectiveness of the proposed method, we conducted tests and generalization experiments on multi-session public datasets and compared the proposed method to other competitive methods. The experimental results indicate that our model is able to perform multiple types of place recognition: lidar-to-lidar (L2L), radar-to-radar (R2R), and radar-to-lidar (R2L), while the learned model is trained only once. We also release the source code publicly: https://github.com/ZJUYH/radar-to-lidar-place-recognition.
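
The abstract outlines the core idea: a jointly trained network maps radar and lidar inputs into a shared embedding space so that descriptors of the same place agree across modalities. The sketch below is illustrative only; the class and function names (SharedEncoder, joint_loss), the network layout, and the triplet-style loss are assumptions made for illustration, not the released implementation.

```python
# Minimal sketch of joint cross-modal embedding learning (hypothetical names,
# not the authors' released code). A shared encoder embeds single-channel
# bird's-eye-view radar and lidar images into a common descriptor space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    """Small CNN that embeds a single-channel BEV image into a unit-norm descriptor."""
    def __init__(self, dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, dim)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)  # unit-norm place descriptor

def joint_loss(encoder, radar, lidar_pos, lidar_neg, margin=0.5):
    """Triplet-style objective: a radar scan should be closer to the lidar
    submap of the same place than to a lidar submap of a different place."""
    a, p, n = encoder(radar), encoder(lidar_pos), encoder(lidar_neg)
    d_pos = 1.0 - (a * p).sum(dim=1)  # cosine distance to the matching place
    d_neg = 1.0 - (a * n).sum(dim=1)  # cosine distance to a non-matching place
    return F.relu(d_pos - d_neg + margin).mean()
```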

Highlights

  • Place recognition is a basic technique for both field robots in the wild and automated vehicles on the road, which helps the agent to recognize revisited places when traveling

  • With the development of the Frequency-Modulated Continuous-Wave (FMCW) radar sensor, radar mapping and localization have been studied in recent years, for example RadarSLAM (Hong et al, 2020), radar odometry (Cen and Newman, 2018; Barnes et al, 2020b), and radar localization on lidar maps (Yin et al, 2020, 2021)

  • Many mobile robots and vehicles are equipped with multiple sensors, and various perception tasks can be achieved via heterogeneous sensor measurements, for example, visual localization on point cloud maps (Ding et al, 2019; Feng et al, 2019) and radar data matching on satellite images (Tang et al, 2020)


Summary

INTRODUCTION

Place recognition is a basic technique for both field robots in the wild and automated vehicles on the road, which helps the agent to recognize revisited places when traveling. Different problems still remain in conventional single-sensor place recognition, and we present a case study in Figure 1 for illustration. These problems arise from the sensor itself at the front-end rather than the recognition algorithm at the back-end. Given that large-scale high-definition lidar maps have already been deployed for commercial use (Li et al, 2017), radar-to-lidar (R2L) place recognition is a feasible solution: it is robust to weather changes and does not require an extra radar mapping session, making the place recognition module more applicable in the real world. The trained model achieves homogeneous place recognition for radar or lidar, and the heterogeneous task for R2L.
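
As a hedged illustration of how R2L place recognition against an existing lidar map could work at query time, the sketch below embeds pre-built lidar submaps once and matches a query radar scan by cosine similarity. It reuses the hypothetical SharedEncoder from the sketch above; the function names (build_lidar_database, recognize_place) are likewise illustrative assumptions, not the authors' released code.

```python
# Query-time retrieval sketch (assumed interface, continuing the hypothetical
# SharedEncoder above): lidar submaps are embedded offline, and a radar query
# is matched against them by cosine similarity of unit-norm descriptors.
import numpy as np
import torch

def build_lidar_database(encoder, lidar_submaps):
    """Embed every lidar submap once, offline (encoder is a trained SharedEncoder)."""
    encoder.eval()
    with torch.no_grad():
        db = torch.cat([encoder(m.unsqueeze(0)) for m in lidar_submaps])
    return db.cpu().numpy()            # (N, dim), rows are unit-norm descriptors

def recognize_place(encoder, radar_scan, lidar_db, top_k=5):
    """Return indices of the top-k lidar submaps most similar to the radar query."""
    encoder.eval()
    with torch.no_grad():
        q = encoder(radar_scan.unsqueeze(0)).cpu().numpy()[0]
    scores = lidar_db @ q              # cosine similarity (descriptors are unit-norm)
    return np.argsort(-scores)[:top_k]
```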

Visual-Based Place Recognition
Lidar-Based Place Recognition
Radar-Based Mapping and Localization
Multi-Modal Measurements for Robotic Perception
METHODS
Building Lidar Submaps
Signature Generation
Joint Training
EXPERIMENTS
Implementation and Experimental Setup
Single-Session
Multi-Session
Case Study
CONCLUSION
Findings
DATA AVAILABILITY STATEMENT
