Abstract

The planetary surface is a complex, unstructured environment with harsh lighting, strong shadows, weak textures, and numerous obstacles, and it offers no GPS signal or satellite network support. The limited computing and sensing capabilities of a planetary rover fall far short of terrestrial autonomous-driving configurations, which poses a great challenge to autonomous navigation. Extreme lighting increases the camera feature-mismatching rate, and poor texture increases the lidar matching error, both of which degrade pose-estimation accuracy. Moreover, rugged, bumpy terrain induces severe motion distortion in camera and lidar data, sharply increasing the cumulative odometry error. Because cameras and lidar are sensitive to environmental changes, they are unsuitable as the primary sensors for a SLAM algorithm in unstructured environments. Therefore, this paper presents an autonomous navigation method for planetary rovers based on multi-modal fusion and multi-factor graph optimization. An IMU (Inertial Measurement Unit) odometry node is added alongside the lidar odometry and visual odometry threads, and the IMU serves as the central node of a multi-factor graph optimization model. On top of this model, a strongly adaptive constraint strategy with reasonable weights is constructed: the pose estimates of the other sensors constrain the IMU bias, while historical rut tracking, obstacle contour matching, and skyline feature matching are introduced as additional constraints. Finally, motion prediction is performed in the IMU odometry node. Simulation results and field tests show that the method copes effectively with the unstructured planetary-surface environment and achieves adaptive, robust autonomous navigation for planetary rovers.
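
As a rough illustration of the IMU-centric fusion described above, the Python sketch below fuses an IMU-predicted pose with lidar, visual, rut-tracking, and skyline-matching estimates via adaptive weights. It is a minimal stand-in, not the authors' implementation: the OdometryFactor class, the fuse_pose function, the example weights, and the closed-form weighted average (which treats the pose, including yaw, as Euclidean and uses scalar weights instead of full information matrices) are all illustrative assumptions rather than the paper's multi-factor graph optimizer.

import numpy as np

class OdometryFactor:
    """One sensor's pose estimate (x, y, yaw) with an adaptive weight."""
    def __init__(self, pose, weight):
        self.pose = np.asarray(pose, dtype=float)
        self.weight = float(weight)

def fuse_pose(imu_pose, factors, imu_weight=1.0):
    """Closed-form weighted least squares: minimizes
    imu_weight * ||x - imu_pose||^2 + sum_i w_i * ||x - pose_i||^2,
    so the IMU node is the central estimate and each modality pulls it
    toward its own measurement in proportion to its adaptive weight."""
    numerator = imu_weight * np.asarray(imu_pose, dtype=float)
    denominator = imu_weight
    for f in factors:
        numerator += f.weight * f.pose
        denominator += f.weight
    return numerator / denominator

# Hypothetical weights: weak texture down-weights lidar, harsh light
# down-weights the camera; rut and skyline constraints stay informative.
imu_prediction = [10.0, 4.0, 0.30]
factors = [
    OdometryFactor([10.4, 4.2, 0.33], weight=0.2),  # lidar odometry
    OdometryFactor([10.3, 4.1, 0.31], weight=0.1),  # visual odometry
    OdometryFactor([10.1, 4.0, 0.29], weight=0.8),  # rut tracking
    OdometryFactor([10.0, 4.1, 0.30], weight=0.6),  # skyline matching
]
print(fuse_pose(imu_prediction, factors, imu_weight=1.0))

In a full factor-graph formulation each factor would carry an information matrix and the optimizer would solve over a whole trajectory; the single-pose weighted average here only conveys how adaptive weighting lets the method down-weight degraded modalities while keeping the IMU node central.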
