Abstract

In recent years, advancements in indoor unmanned navigation and positioning technologies have driven increasing demand for precise and intelligent systems. Indoor environments, however, pose challenges: spatial constraints cause objects to intersect or occlude one another, which significantly impedes sensor recognition and compromises localization accuracy. This paper presents a multi-sensor fusion framework built primarily around LiDAR and complemented by RGB-D and IMU sensors to mitigate these challenges. To address point cloud misalignment arising from occlusions, an improved PL-ICP algorithm is proposed, yielding higher accuracy and speed in occlusion scenarios. In addition, the Otsu algorithm is enhanced by leveraging RGB-D matching to extract additional feature information, improving the system's convergence when occlusion occurs. Finally, the Extended Kalman Filter (EKF) is employed to fuse point cloud and image data. Extensive experimental results demonstrate that the proposed approach not only improves localization accuracy and stability but also exhibits superior convergence, offering an economical and efficient solution for robotic navigation.
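To make the EKF fusion step concrete, the following is a minimal sketch of how a pose filter might combine an IMU-driven motion model with pose observations from scan matching such as PL-ICP. The state layout [x, y, theta], the unicycle motion model, and all class and parameter names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def wrap_angle(a):
    """Normalize an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

class PoseEKF:
    """Sketch of a 2D pose EKF: predict with IMU rates, correct with a
    scan-match pose. Assumed structure, not the paper's exact filter."""

    def __init__(self, x0, P0, Q, R):
        self.x = np.asarray(x0, dtype=float)   # state: [x, y, theta]
        self.P = np.asarray(P0, dtype=float)   # state covariance
        self.Q = np.asarray(Q, dtype=float)    # process noise (IMU/odometry)
        self.R = np.asarray(R, dtype=float)    # measurement noise (scan match)

    def predict(self, v, omega, dt):
        """Propagate the state with a unicycle motion model driven by
        IMU-derived linear velocity v and angular rate omega."""
        x, y, th = self.x
        self.x = np.array([x + v * dt * np.cos(th),
                           y + v * dt * np.sin(th),
                           wrap_angle(th + omega * dt)])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                      [0.0, 1.0,  v * dt * np.cos(th)],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        """Correct with a full-pose measurement z = [x, y, theta],
        e.g., the pose returned by PL-ICP scan matching."""
        H = np.eye(3)                      # measurement observes the state directly
        y = z - H @ self.x                 # innovation
        y[2] = wrap_angle(y[2])            # keep the angular residual bounded
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.x[2] = wrap_angle(self.x[2])
        self.P = (np.eye(3) - K @ H) @ self.P

# Example: predict with IMU data at 100 Hz, then correct with a scan-match pose.
ekf = PoseEKF(x0=[0, 0, 0], P0=np.eye(3) * 0.1,
              Q=np.diag([1e-3, 1e-3, 1e-4]), R=np.diag([5e-2, 5e-2, 1e-2]))
ekf.predict(v=0.5, omega=0.1, dt=0.01)
ekf.update(np.array([0.006, 0.0, 0.002]))
print(ekf.x)
```

In a framework like the one described, the prediction step would run at the IMU rate, while corrections would arrive at the (slower) LiDAR scan-matching rate; the noise covariances Q and R above are placeholder values.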
