With the continuous advancement of autonomous driving technology, a growing number of high-definition (HD) maps have been generated and stored in geospatial databases. These HD maps can provide strong localization support for mobile robots equipped with light detection and ranging (LiDAR) sensors. However, the global localization of heterogeneous robots in complex environments remains challenging: most existing point cloud global localization methods perform poorly because heterogeneous robots observe the scene from different perspectives. Leveraging existing HD maps, this paper proposes a base-map-guided localization solution for heterogeneous robots. A novel co-view context descriptor with rotational invariance is developed to represent the characteristics of heterogeneous point clouds in a unified manner. The pre-set base map is divided into virtual scans, each of which generates a candidate co-view context descriptor. These descriptors are distributed to the robots before operation. Coarse localization is achieved by matching the query co-view context descriptors of a working robot against the assigned candidate descriptors; the localization is then refined through point cloud registration. The proposed solution applies to both single-robot and multi-robot global localization scenarios, especially when communication is impaired. The heterogeneous datasets used in the experiments cover both indoor and outdoor scenes and various scanning modes. The average rotation and translation errors are within 1° and 0.30 m, indicating that the proposed solution can provide reliable localization support across heterogeneous robots even under communication failures.
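As a rough illustration of the coarse stage, the sketch below builds a rotation-invariant polar descriptor from a LiDAR scan and matches it against precomputed candidate descriptors from virtual scans of a base map. The paper's actual co-view context descriptor is not specified in the abstract, so the grid layout, the FFT-magnitude step used here for rotation invariance, and all names and parameters (polar_descriptor, num_rings, num_sectors, max_range) are illustrative assumptions, not the authors' method.

import numpy as np

def polar_descriptor(points, num_rings=20, num_sectors=60, max_range=80.0):
    # Stand-in for the co-view context descriptor (an assumption, not the
    # paper's construction): bin points by range (rings) and azimuth
    # (sectors), keep the max height per cell, then take the FFT magnitude
    # along the sector axis. A yaw rotation of the scan circularly shifts
    # the sectors, and the FFT magnitude is invariant to circular shifts.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)
    keep = r < max_range
    ring = np.minimum((r[keep] / max_range * num_rings).astype(int),
                      num_rings - 1)
    azimuth = np.arctan2(y[keep], x[keep])           # in [-pi, pi]
    sector = np.minimum(((azimuth + np.pi) / (2 * np.pi) * num_sectors).astype(int),
                        num_sectors - 1)
    grid = np.full((num_rings, num_sectors), -np.inf)
    np.maximum.at(grid, (ring, sector), z[keep])     # max height per cell
    grid[np.isneginf(grid)] = 0.0                    # empty cells -> 0
    return np.abs(np.fft.rfft(grid, axis=1))         # rotation-invariant

def coarse_localize(query_points, candidate_descriptors):
    # Pick the virtual scan whose descriptor is closest to the query's.
    q = polar_descriptor(query_points)
    dists = [np.linalg.norm(q - c) for c in candidate_descriptors]
    return int(np.argmin(dists))

Under this sketch, the pose of the best-matching virtual scan serves as the coarse estimate, and the refinement would be a standard point cloud registration step (e.g., ICP, as provided by open3d.pipelines.registration.registration_icp), which the abstract does not detail.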