The application of multi-sensor fusion technology to indoor mobile robot navigation and localization has attracted increasing attention, especially in complex indoor environments where achieving high-precision autonomous navigation remains a significant challenge. This review summarizes various sensor fusion methods, including combinations of Light Detection and Ranging (LiDAR), Inertial Measurement Units (IMU), Ultra-Wideband (UWB), and other sensors, with a focus on the progress of fusion algorithms such as the Extended Kalman Filter (EKF) and Adaptive Monte Carlo Localization (AMCL) in improving navigation accuracy and stability. In addition, this paper explores the application of visual Simultaneous Localization and Mapping (SLAM) methods that incorporate deep learning to indoor robot navigation. Finally, the main challenges currently facing multi-sensor fusion in autonomous robot navigation are analyzed, and future research directions are proposed, aiming to provide useful references for researchers in the field.
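As a rough illustration of the EKF-based fusion schemes surveyed here, the sketch below propagates a planar pose with IMU accelerations and corrects it with UWB ranges to fixed anchors. The anchor layout, noise parameters, and constant-acceleration motion model are illustrative assumptions for this sketch only, not details of any specific system discussed in the review.

```python
# Minimal sketch of an EKF fusing IMU-propagated motion with UWB range
# measurements. Anchor positions, noise scales, and the motion model are
# illustrative assumptions, not parameters of any cited system.
import numpy as np

class UwbImuEkf:
    def __init__(self, anchors, q=0.05, r=0.1):
        self.anchors = np.asarray(anchors, dtype=float)   # UWB anchor positions (N x 2)
        self.x = np.zeros(4)                              # state: [px, py, vx, vy]
        self.P = np.eye(4)                                # state covariance
        self.q = q                                        # accel (process) noise scale
        self.R = r**2 * np.eye(len(anchors))              # UWB range noise covariance

    def predict(self, accel, dt):
        """Propagate the state with IMU acceleration as the control input."""
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        B = np.array([[0.5 * dt**2, 0],
                      [0, 0.5 * dt**2],
                      [dt, 0],
                      [0, dt]], dtype=float)
        self.x = F @ self.x + B @ np.asarray(accel, dtype=float)
        Q = self.q**2 * B @ B.T                           # process noise from accel uncertainty
        self.P = F @ self.P @ F.T + Q

    def update(self, ranges):
        """Correct the state with UWB ranges to known anchors (nonlinear measurement)."""
        pos = self.x[:2]
        diffs = pos - self.anchors                        # (N x 2)
        pred = np.linalg.norm(diffs, axis=1)              # predicted ranges h(x)
        H = np.zeros((len(self.anchors), 4))
        H[:, :2] = diffs / pred[:, None]                  # Jacobian of h w.r.t. position
        y = np.asarray(ranges, dtype=float) - pred        # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)               # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

# Example: one predict/update cycle with made-up readings.
ekf = UwbImuEkf(anchors=[[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
ekf.x[:2] = [5.0, 5.0]                # coarse initial position guess
ekf.predict(accel=[0.1, 0.0], dt=0.02)
ekf.update(ranges=[7.14, 7.14, 7.00])
print(ekf.x[:2])                      # fused position estimate
```

The same predict/correct structure generalizes to richer states (orientation, biases) and to other corrections such as LiDAR scan-matched poses, which is the pattern most EKF-based indoor fusion pipelines follow.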