Abstract
Traditional vision-based navigation algorithms are highly affected by non-nominal conditions, such as varying illumination and environmental uncertainties. Thanks to their outstanding generalization capability and flexibility, deep neural networks (and AI algorithms in general) are excellent candidates to overcome these shortcomings. This paper presents a vision-based navigation system that uses a Convolutional Neural Network (CNN) to solve the task of pinpoint landing on the Moon with absolute navigation, i.e., with respect to the Mean Earth/Polar Axis reference frame. The landing scenario covers the spacecraft's descent to the lunar South Pole, from a parking orbit through the powered descent phase. The architecture features an Object Detection Network (ODN), a CNN trained with a supervised learning approach. The ODN extracts features of the observed craters, which are then processed by standard image processing algorithms to provide pseudo-measurements usable by the navigation filter. The detected craters are matched against a database containing the inertial locations of known craters. An Extended Kalman Filter with time-delayed measurement integration is developed to fuse optical and altimeter information.
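The crater-matching step described above can be sketched as a nearest-neighbor association between detected crater centroids and catalog craters reprojected into the image plane. The code below is a minimal illustrative sketch, not the paper's implementation: the function name `match_craters`, the gating threshold, and the data layout are all assumptions made for illustration.

```python
import math

def match_craters(detections, catalog, gate=5.0):
    """Associate detected crater centroids with projected catalog craters.

    detections: list of (x, y) centroids in pixels, e.g. from an
                object-detection CNN (hypothetical input format).
    catalog: dict mapping crater id -> (x, y) image position obtained by
             reprojecting the crater's known inertial location.
    gate: maximum pixel distance accepted as a valid match (assumed value).

    Returns a list of (crater_id, (x, y)) pseudo-measurements that a
    navigation filter could consume.
    """
    matches = []
    for det in detections:
        best_id, best_dist = None, gate
        # Brute-force nearest neighbor; a k-d tree would scale better.
        for crater_id, projected in catalog.items():
            dist = math.hypot(det[0] - projected[0], det[1] - projected[1])
            if dist < best_dist:
                best_id, best_dist = crater_id, dist
        if best_id is not None:
            matches.append((best_id, det))
    return matches
```

Detections with no catalog crater inside the gate are discarded rather than force-matched, since a wrong association would corrupt the filter update.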