Abstract

Major space agencies have an increasing interest in highly accurate (200 m) autonomous landing on the Moon. Inertial-only navigation is not compatible with this challenging requirement. The techniques currently investigated rely on vision-based navigation. A first approach consists of tracking features between sequences of images in order to measure the angular rate as well as the direction of the velocity vector of the spacecraft. A second approach aims at identifying image features using a geo-referenced on-board database to determine the attitude and the position of the spacecraft. However, existing algorithms are computationally prohibitive and have limited robustness to varying illumination conditions and surface characteristics. This paper presents the development of an innovative autonomous vision-based navigation system addressing these problems. Numerical simulations have shown that this system is capable of estimating the position and velocity of the vehicle with an accuracy better than 100 m and 0.1 m/s, respectively. This work is the result of a successful collaboration between the Université de Sherbrooke, NGC Aerospace Ltd., Thales Alenia Space and the European Space Agency. The proposed system has been selected as the main navigation algorithm in three major research and development projects sponsored by the European Space Agency and the Canadian Space Agency.
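The first approach mentioned above, recovering the direction of the velocity vector from tracked features, can be illustrated with a classical focus-of-expansion (FOE) computation: under pure camera translation, every optical-flow vector points away from the FOE, whose image location encodes the translation direction. The sketch below is a minimal least-squares formulation in NumPy; the function name, the synthetic flow field, and the least-squares setup are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares focus of expansion from 2-D feature flow.

    Each flow vector f at image point p must be parallel to (p - foe),
    i.e. (p - foe) x f = 0. Rearranging gives one linear equation per
    tracked feature in the two unknown FOE coordinates.
    (Illustrative sketch only; not the paper's navigation algorithm.)
    """
    fx, fy = flows[:, 0], flows[:, 1]
    px, py = points[:, 0], points[:, 1]
    # Parallelism constraint rewritten as A @ foe = b
    A = np.column_stack([fy, -fx])
    b = px * fy - py * fx
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic check: build an expanding flow field around a known FOE
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(50, 2))
true_foe = np.array([0.2, -0.1])
scales = rng.uniform(0.5, 1.5, size=(50, 1))
flw = scales * (pts - true_foe)          # flow radiates from the FOE
est = focus_of_expansion(pts, flw)
```

With noise-free synthetic flow the recovered FOE matches the true one; in practice the feature tracks would come from a detector/tracker and the system would be solved robustly (e.g. with outlier rejection).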
