Abstract

Purpose – This study aims to develop a feasible, precise navigation model for the planned lunar rover. Autonomous navigation is one of the most important missions in the Chinese lunar exploration project, and machine vision is expected to be a promising option for this mission because of the rapid development of image processing techniques. However, existing approaches often suffer from low accuracy and error accumulation.

Design/methodology/approach – In this paper, a novel autonomous navigation model was developed based on rigid geometric and photogrammetric theory, comprising three steps: stereo perception, relative positioning and absolute adjustment. The first step detects accurate three-dimensional (3D) surroundings of the rover by matching stereo-paired images; the second determines local changes in the rover's location and orientation by matching temporally adjacent images; and the third locates the rover in the whole scene by matching ground images against satellite images. For all three steps, the SURF algorithm, widely regarded as one of the best image-matching algorithms, was adopted to find corresponding points.

Findings – Experiments indicated that an accurate 3D scene, relative positioning and absolute adjustment were readily generated from the matching results. More importantly, the proposed algorithm is able to match images with large differences in illumination, scale and viewing angle.

Originality/value – The experiments and findings in this study demonstrate that the proposed method could serve as a feasible alternative navigation model for the planned lunar rover.
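As a rough illustration of the first step (stereo perception), depth can be recovered from matched keypoints in a rectified stereo pair via the standard relation Z = f·B/d, where f is the focal length, B the stereo baseline and d the horizontal disparity. The sketch below uses hypothetical camera parameters and hand-picked matched coordinates, not the rover's actual configuration or the paper's SURF-based matching:

```python
# Minimal sketch of stereo depth recovery from matched keypoints in a
# rectified stereo pair. FOCAL_LENGTH_PX and BASELINE_M are assumed
# illustration values, not the rover's real camera parameters.
FOCAL_LENGTH_PX = 700.0  # focal length in pixels (assumed)
BASELINE_M = 0.3         # stereo baseline in metres (assumed)

def depth_from_disparity(x_left, x_right):
    """Depth Z = f * B / d, where d = x_left - x_right is the horizontal
    disparity between corresponding points in a rectified stereo pair."""
    depths = []
    for xl, xr in zip(x_left, x_right):
        d = xl - xr
        if d <= 0:
            raise ValueError("disparity must be positive for points in front of the cameras")
        depths.append(FOCAL_LENGTH_PX * BASELINE_M / d)
    return depths

# x-coordinates (in pixels) of the same surface points as matched in the
# left and right images; in the paper these correspondences come from SURF.
left_x = [320.0, 410.0, 250.0]
right_x = [300.0, 395.0, 220.0]
print(depth_from_disparity(left_x, right_x))  # larger disparity -> nearer point
```

In practice the matched coordinates would come from a feature matcher such as SURF applied to the stereo pair, and the resulting per-point depths would be assembled into the 3D scene used for the later positioning steps.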
