Abstract

Traditional vision-based navigation algorithms are highly affected by non-nominal conditions, such as varying illumination and environmental uncertainties. Thanks to their outstanding generalization capability and flexibility, deep neural networks (and AI algorithms in general) are excellent candidates to address this shortcoming of navigation algorithms. This paper presents a vision-based navigation system that uses a Convolutional Neural Network to solve the task of pinpoint landing on the Moon with absolute navigation, namely with respect to the Mean Earth/Polar Axis reference frame. The Moon landing scenario covers the spacecraft's descent toward the South Pole, from a parking orbit up to the powered descent phase. The architecture features an Object Detection Convolutional Neural Network (ODN) trained with a supervised learning approach. The CNN extracts features of the observed craters, which are then processed by standard image processing algorithms to provide pseudo-measurements for the navigation filter. The detected craters are matched against a database containing the inertial locations of known craters. An Extended Kalman Filter with time-delayed measurement integration is developed to fuse the optical and altimeter information.
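The time-delayed measurement fusion mentioned above can be illustrated with a toy linear Kalman filter: a one-dimensional descent state (altitude and vertical velocity), a direct altitude update, and a delayed pseudo-measurement that is applied at its original time step and then re-propagated to the present. Every detail here is an illustrative assumption (the dynamics model, noise values, and the roll-back/re-propagation scheme), not the paper's actual EKF.

```python
import numpy as np

# Toy 1-D descent filter: state x = [altitude, vertical velocity].
# Constant-velocity dynamics and all noise values are assumptions.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
Q = 0.01 * np.eye(2)                     # process noise (assumed)
H = np.array([[1.0, 0.0]])               # measurement observes altitude only
R = np.array([[0.25]])                   # measurement noise (assumed)

def predict(x, P):
    """Propagate state and covariance one step forward."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Standard Kalman measurement update with measurement z."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

def fuse_delayed(history, k_meas, z):
    """Fuse a measurement taken at past step k_meas: roll back to the
    stored state, apply the update, then re-propagate to the present."""
    x, P = history[k_meas]
    x, P = update(x, P, z)
    for _ in range(len(history) - 1 - k_meas):
        x, P = predict(x, P)
    return x, P
```

A usage pattern consistent with this sketch is to store `(x, P)` at each step, then call `fuse_delayed` once the image-processing pipeline delivers the crater-based pseudo-measurement, which arrives with some latency relative to the altimeter.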
