Abstract

Vision-based robot pose estimation and mapping systems suffer from low pose estimation accuracy and poor mapping of local detail when the modeling environment has sparse features, high dynamics, weak light, or heavy shadows. To address these issues, we propose an adaptive pose fusion (APF) method that fuses the robot's pose estimates and uses the optimized pose to construct an indoor map. First, the proposed method computes the robot's pose from the camera and from the inertial measurement unit (IMU) separately. The fusion method is then selected adaptively according to the robot's motion state: when the robot is static, the camera and IMU data are fused directly with the extended Kalman filter (EKF) method; when the robot is in motion, a weighting coefficient is determined from the matching success rate of the feature points, and the camera and IMU data are fused with the weighted pose fusion (WPF) method. In this way, a sequence of new robot poses is obtained across the different states. Second, the fused and optimized pose is used to correct the distance and azimuth angle of the laser points obtained by LiDAR, and a Gauss–Newton iterative matching process aligns the corresponding laser points to construct an indoor map. Finally, a pose fusion experiment is designed, and the EuRoC dataset and measured data are used to verify the effectiveness of the method. The experimental results confirm that this method provides higher pose estimation accuracy than the robust visual-inertial odometry (ROVIO) and visual-inertial ORB-SLAM (VI ORB-SLAM) algorithms, and higher two-dimensional map modeling accuracy and modeling performance than the Cartographer algorithm.
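As a concrete illustration of the first stage, the sketch below selects between EKF fusion (static state) and weighted fusion (in motion) for a 2-D pose [x, y, θ]. This is a minimal sketch, not the paper's implementation: the state vector, the noise matrices `Q` and `R_cam`, and the choice of the matching success rate itself as the camera weight are assumptions made for illustration.

```python
import numpy as np

def fuse_pose(pose_cam, pose_imu, P, is_static, match_rate,
              Q=np.eye(3) * 1e-3, R_cam=np.eye(3) * 1e-2):
    """One step of adaptive pose fusion (APF), as described in the abstract.

    pose_cam, pose_imu : (3,) arrays [x, y, theta] from camera and IMU
    P                  : (3, 3) pose covariance carried between calls
    is_static          : motion-state flag (e.g. thresholded IMU readings)
    match_rate         : feature-point matching success rate in [0, 1]
    Q, R_cam           : assumed process / camera measurement noise

    Returns (fused_pose, P). Illustrative only: the paper's exact state
    vector, noise models, and weighting coefficient are not given here.
    """
    if is_static:
        # Static state: EKF-style fusion. Propagate with the IMU pose
        # (identity state transition), then update with the camera pose
        # as a direct measurement (H = I), so the gain is K = P (P + R)^-1.
        P = P + Q
        K = P @ np.linalg.inv(P + R_cam)
        fused = pose_imu + K @ (pose_cam - pose_imu)
        P = (np.eye(3) - K) @ P
        return fused, P
    # Motive state: weighted pose fusion (WPF). Here the camera weight is
    # taken to be the matching success rate itself (one plausible choice).
    w = match_rate
    fused = w * pose_cam + (1.0 - w) * pose_imu  # naive averaging for theta
    return fused, P
```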
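The second stage aligns pose-corrected laser points by Gauss–Newton iteration. Below is a minimal sketch of such a 2-D Gauss–Newton alignment, assuming the point correspondences have already been established (in practice they would come from matching each scan against the map); the fixed iteration count and plain least-squares objective are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def gauss_newton_align(src, dst, iters=10):
    """Estimate a 2-D pose (x, y, theta) aligning source points to
    matched destination points by Gauss-Newton iteration.

    src, dst : (N, 2) arrays of corresponding laser points.
    Minimizes sum_i || R(theta) @ p_i + t - q_i ||^2.
    """
    x, y, theta = 0.0, 0.0, 0.0
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        dR = np.array([[-s, -c], [c, -s]])        # dR/dtheta
        r = src @ R.T + np.array([x, y]) - dst    # (N, 2) residuals
        H = np.zeros((3, 3))
        g = np.zeros(3)
        for p, ri in zip(src, r):
            J = np.column_stack((np.eye(2), dR @ p))  # (2, 3) Jacobian
            H += J.T @ J
            g += J.T @ ri
        delta = np.linalg.solve(H, -g)            # Gauss-Newton step
        x, y, theta = x + delta[0], y + delta[1], theta + delta[2]
    return x, y, theta
```

In a full system, a convergence test on the step norm would normally replace the fixed iteration count.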

Highlights

  • An accurate and efficient pose estimation system is an essential enabler for motion control and path planning in autonomous robots

  • If the robot is in a static state, the pose estimates from the camera and the inertial measurement unit (IMU) are fused using the extended Kalman filter (EKF) method

  • When the robot is in motion, the pose estimates from the camera and the IMU are fused using the weighted pose fusion (WPF) method



Introduction

An accurate and efficient pose estimation system is an essential enabler for motion control and path planning in autonomous robots. In indoor environments, SLAM technology is one of the effective means of solving robotic environmental cognition, positioning, and navigation [1]. SLAM refers to the process by which a robot equipped with specific sensors builds an environmental model while moving and simultaneously estimates its own motion [2]. Depending on the available sensors, SLAM can be divided into visual SLAM and laser (LiDAR) SLAM. With the development of computer vision, image processing, and artificial intelligence, the accuracy of visual SLAM has improved. However, applying visual SLAM to feature-poor, highly dynamic, and/or weakly lit scenes results in low performance [5]. The IMU sensor can provide complementary information to improve the performance of visual SLAM.

