Abstract

Estimating a robot's state within a known map, commonly referred to as "localization," is an essential problem for mobile robots. Although LiDAR-based localization is practical in many applications, achieving global localization with LiDAR alone is difficult because of its low-dimensional feedback, especially in environments with repetitive geometric features. This paper introduces a sensor-fusion-based localization system capable of addressing the global localization problem. Both LiDAR and vision sensors are integrated, exploiting the rich information provided by the vision sensor and the robustness of LiDAR. A hybrid grid map is built for global localization, and a visual global descriptor is applied to speed up localization convergence, combined with a pose-refining pipeline that improves localization accuracy. In addition, a trigger mechanism is introduced to handle the kidnapped-robot problem and to verify the relocalization result. Experiments under different conditions are designed to evaluate the performance of the proposed approach and to compare it with existing localization systems. The experimental results show that our system solves the global localization problem and that its sensor-fusion mechanism improves performance.
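The abstract outlines a coarse-to-fine pipeline: a visual global descriptor retrieves candidate poses, a LiDAR-based refinement step sharpens them, and a verification threshold accepts or rejects the result. The paper's actual components are not reproduced here, so the following is only a minimal sketch of that flow; all function names, the brute-force grid-search refinement, and the acceptance threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch only: descriptor retrieval -> LiDAR refinement ->
# threshold-based verification. Every name below is a hypothetical stand-in.

def descriptor_candidates(query_desc, db_descs, db_poses, k=3):
    """Retrieve the k keyframe poses whose visual global descriptors are
    nearest to the query descriptor (coarse global localization)."""
    dists = np.linalg.norm(db_descs - query_desc, axis=1)
    return [db_poses[i] for i in np.argsort(dists)[:k]]

def scan_score(pose, scan_xy, grid, resolution):
    """Fraction of LiDAR endpoints that land on occupied cells of a 2-D
    occupancy grid when the scan is transformed by pose = (x, y, theta)."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    pts = scan_xy @ np.array([[c, s], [-s, c]]) + np.array([x, y])
    ij = np.floor(pts / resolution).astype(int)
    inside = (ij[:, 0] >= 0) & (ij[:, 0] < grid.shape[0]) & \
             (ij[:, 1] >= 0) & (ij[:, 1] < grid.shape[1])
    hits = grid[ij[inside, 0], ij[inside, 1]]
    return hits.mean() if hits.size else 0.0

def refine_pose(pose, scan_xy, grid, resolution):
    """Local grid search around a candidate pose: a crude stand-in for the
    paper's pose-refining pipeline (e.g. a scan-matching step)."""
    best, best_s = pose, scan_score(pose, scan_xy, grid, resolution)
    for dx in np.linspace(-0.2, 0.2, 5):
        for dy in np.linspace(-0.2, 0.2, 5):
            for dth in np.linspace(-0.1, 0.1, 5):
                cand = (pose[0] + dx, pose[1] + dy, pose[2] + dth)
                s = scan_score(cand, scan_xy, grid, resolution)
                if s > best_s:
                    best, best_s = cand, s
    return best, best_s

def relocalize(query_desc, scan_xy, db_descs, db_poses, grid,
               resolution=0.05, accept=0.6):
    """Retrieve visual candidates, refine each against the LiDAR map, and
    accept the best only if its score passes a verification threshold."""
    best, best_s = None, 0.0
    for cand in descriptor_candidates(query_desc, db_descs, db_poses):
        refined, s = refine_pose(cand, scan_xy, grid, resolution)
        if s > best_s:
            best, best_s = refined, s
    return (best, best_s) if best_s >= accept else (None, best_s)
```

In such a scheme, the trigger mechanism described in the abstract would invoke relocalize whenever pose confidence drops (e.g., when the tracking score falls below a threshold), and the final acceptance check plays the role of the verification step; the specific thresholds and scoring used by the authors are not given in the abstract.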
