Abstract

Landing an autonomous spacecraft within 100 m of a mapped target is a key navigation challenge in planetary exploration. Vision-based approaches attempt to match 2D features detected in camera images against 3D mapped landmarks to reach the required precision. This paper presents a vision-aided inertial navigation system for pinpoint planetary landing called LION (Landing Inertial and Optical Navigation). It operates over any type of terrain, regardless of topography. LION uses measurements from a novel image-to-map matcher to update, through a tightly coupled data fusion scheme, the state of an extended Kalman filter propagated with inertial data. The image processing uses the state and covariance predictions from the filter to determine the regions and extraction scales in which to search for non-ambiguous landmarks in the image. The per-landmark image scale management process greatly improves the repeatability rate between the map and descent images. A lunar-representative optical test bench called Visilab was also designed to test LION. The observability of absolute navigation performance in Visilab is evaluated with a model developed specifically for this purpose. Finally, the system's performance is evaluated at several altitudes, along with its robustness to off-nadir camera angles, illumination changes, a different map generation process, and non-planar topography. The navigation error converges to a mean of 4 m with a 3-RMS dispersion of 47 m at 3 km of altitude on the test setup at scale.
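To illustrate the idea of using the filter's state and covariance predictions to bound the landmark search in the image, the following Python sketch projects a mapped 3D landmark with a predicted camera pose and propagates the position covariance through the projection Jacobian into a pixel-space search window. This is a minimal, hypothetical example with a simplified pinhole model and assumed variable names; it is not the paper's implementation.

```python
import numpy as np

def predict_search_region(landmark_w, p_wc, R_cw, P_pos, K, n_sigma=3.0):
    """Predict where a mapped landmark should appear in the descent image
    and how large the search window must be, given the EKF prediction.

    landmark_w : (3,) landmark position in the map/world frame
    p_wc       : (3,) predicted camera position in the world frame
    R_cw       : (3,3) predicted rotation from world to camera frame
    P_pos      : (3,3) position covariance block of the predicted state
    K          : (3,3) camera intrinsic matrix
    """
    # Landmark expressed in the camera frame, then pinhole projection.
    p_c = R_cw @ (landmark_w - p_wc)
    x, y, z = p_c
    fx, fy = K[0, 0], K[1, 1]
    u = fx * x / z + K[0, 2]
    v = fy * y / z + K[1, 2]

    # Jacobian of the pixel coordinates w.r.t. the camera position,
    # obtained by chaining the projection Jacobian with d(p_c)/d(p_wc) = -R_cw.
    J_proj = np.array([[fx / z, 0.0, -fx * x / z**2],
                       [0.0, fy / z, -fy * y / z**2]])
    J = J_proj @ (-R_cw)

    # 2x2 pixel-space covariance of the predicted landmark location.
    S = J @ P_pos @ J.T

    # Axis-aligned n-sigma bounds define the image region (and, indirectly,
    # the extraction scale) in which to search for the landmark.
    half_u = n_sigma * np.sqrt(S[0, 0])
    half_v = n_sigma * np.sqrt(S[1, 1])
    return (u, v), (half_u, half_v)
```

A tighter predicted covariance yields a smaller search window, which reduces matching ambiguity; this is the intuition behind coupling the image processing to the filter prediction.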
