Abstract. LiDAR systems achieve high precision and are widely used for indoor and outdoor mobile positioning and mapping. However, they still degrade in environments that lack strong geometric features. When the carrier moves at high speed or travels along mirror-like walls for extended periods, the laser SLAM system can degrade severely or even fail to position. Event cameras are biologically inspired vision sensors that are highly robust in high-dynamic, low-texture environments and therefore hold promise in such scenes, although they still have unresolved issues of their own. In this paper, a multi-source fusion method based on trajectory-layer post-processing of EVIO, LIO, and IMU solutions is proposed. It fully exploits the robustness of the event camera in high-dynamic environments and the high precision of LiDAR in conventional environments, using a weighting algorithm based on normalized uncertainty. The resulting elastic multi-source fusion of the event camera and LiDAR is implemented and tested in real environments. Experimental results show that the proposed algorithm effectively improves on the accuracy of both the event-camera and LiDAR solutions. Compared with current state-of-the-art algorithms, it effectively addresses the degradation of LIO in feature-poor environments and mitigates, to some extent, the scale inaccuracy and divergence of the EVIO trajectory. Compared to LIO, the algorithm reduces the maximum position error by approximately 30% and improves overall position accuracy by 32%. It also significantly constrains the divergence of errors in the Y direction, improving accuracy there by about 75% and 65% over the LIO and EVIO algorithms, respectively.
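The abstract does not spell out the fusion rule. As a minimal sketch of trajectory-layer fusion with normalized-uncertainty weights (the per-axis variances, the normalization against their sum, and the function name below are all assumptions, not the paper's stated method), each epoch's fused position could be an inverse-variance weighted average of the two time-aligned solutions:

    # Sketch only: illustrates normalized-uncertainty trajectory fusion;
    # the paper's actual weighting and alignment steps are not given here.
    import numpy as np

    def fuse_trajectories(p_lio, var_lio, p_evio, var_evio, eps=1e-9):
        """Fuse time-aligned LIO and EVIO positions, each an (N, 3) array.

        var_lio / var_evio hold assumed per-axis position variances
        at each common epoch.
        """
        # Normalize the two uncertainties against their sum so the
        # sources are directly comparable (assumed normalization scheme).
        total = var_lio + var_evio + eps
        w_lio = var_evio / total   # low LIO variance -> high LIO weight
        w_evio = var_lio / total   # low EVIO variance -> high EVIO weight
        # Elastic weighted average: LiDAR dominates in feature-rich
        # scenes; the event camera takes over when LIO degrades.
        return w_lio * p_lio + w_evio * p_evio

Weighting by normalized uncertainty makes the fusion elastic in the sense used above: whichever sensor is currently more certain dominates the fused trajectory, so the event camera can carry the solution through segments where LIO degrades.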