Abstract

Autonomous navigation of a rover on the Martian surface can significantly increase the daily traverse, particularly when driving away from the lander into unknown areas. The autonomous navigation process developed at CNES is based on stereo camera perception, used to build a model of the environment and to generate trajectories. Merging of multiple perceptions, with propagation of the locomotion and localization errors, has been implemented. The algorithms developed for Mars exploration programs, the vision hardware, the validation tools, the experimental platforms, and the evaluation results are presented. Portability and the evaluation of the computing resources required for implementation on a Mars rover are also addressed. The results show that autonomy requires only a very small amount of energy and computing time while fully using the rover's capabilities, allowing a much longer daily traverse than purely ground-planned strategies.
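The abstract mentions merging multiple perceptions while propagating locomotion and localization errors. As a purely illustrative aside, and not the CNES algorithm itself, the sketch below shows one common way such error propagation can be expressed: chaining relative pose estimates in the plane while propagating their covariance to first order, so that successive stereo perceptions can be placed in a common frame with a growing uncertainty. All function names, noise values, and step sizes are assumptions.

```python
# Minimal sketch (not the CNES implementation): first-order propagation of
# 2-D pose uncertainty while chaining relative motion estimates, as needed
# when merging successive stereo perceptions into one terrain model.
import numpy as np

def compose(pose, delta):
    """Compose a global pose (x, y, theta) with a relative motion (dx, dy, dtheta)."""
    x, y, th = pose
    dx, dy, dth = delta
    return np.array([x + dx * np.cos(th) - dy * np.sin(th),
                     y + dx * np.sin(th) + dy * np.cos(th),
                     th + dth])

def propagate_covariance(pose, delta, P, Q):
    """Propagate pose covariance P with per-step motion noise Q (first order)."""
    _, _, th = pose
    dx, dy, _ = delta
    # Jacobian of the composed pose w.r.t. the previous pose
    Fp = np.array([[1.0, 0.0, -dx * np.sin(th) - dy * np.cos(th)],
                   [0.0, 1.0,  dx * np.cos(th) - dy * np.sin(th)],
                   [0.0, 0.0,  1.0]])
    # Jacobian of the composed pose w.r.t. the relative motion
    Fd = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
    return Fp @ P @ Fp.T + Fd @ Q @ Fd.T

# Example: uncertainty grows as the rover drives away from a known start pose.
pose = np.zeros(3)
P = np.zeros((3, 3))                       # perfectly known initial pose
Q = np.diag([0.01, 0.01, 0.001]) ** 2      # assumed per-step locomotion noise
for _ in range(10):
    delta = np.array([0.5, 0.0, 0.05])     # 0.5 m forward with a slight turn
    P = propagate_covariance(pose, delta, P, Q)
    pose = compose(pose, delta)
print(np.sqrt(np.diag(P)))                 # 1-sigma position/heading uncertainty
```

In such a scheme, each new local terrain model would be attached to the current pose estimate, and the accumulated covariance indicates how far the merged map can be trusted before a new localization fix is needed.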
