The extension of human spaceflight across an ever-expanding domain, together with increasingly intricate mission architectures, demands a paradigm shift in autonomous navigation algorithms, especially for the powered-descent phase of planetary landing. Deep learning architectures have previously been explored for low-dimensional localization, with limited success. Because novel algorithms are expected to operate in real missions, proposed approaches must be rigorously evaluated across diverse scenarios and demonstrate sufficient robustness. In the current work, a novel formulation is proposed to train CNN-based Deep Learning (DL) models in a multi-layer cascading architecture and to use the resulting classification probabilities as regression weights for estimating the position of the lander spacecraft. The approach leverages image-intensity and depth data provided by multiple on-board sensors to accurately determine the spacecraft's location relative to the observed terrain at a specific altitude. Navigation performance is validated through Monte Carlo analysis, demonstrating the efficacy of the proposed DL architecture and the subsequent state-estimation framework across several simulated scenarios, and showing strong promise for extending the multi-modal feature-learning approach to realistic missions.
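The core idea of turning classification probabilities into regression weights can be sketched as a probability-weighted combination of known terrain-patch coordinates. The sketch below is illustrative only: the patch classes, their map coordinates, and the logits are assumptions for demonstration, not the paper's actual model or data.

```python
import numpy as np

def softmax(logits):
    """Convert raw CNN class scores into a probability distribution."""
    z = logits - logits.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical: four terrain-patch classes, each with a known 2-D map
# coordinate (km) of its center in the landing-site reference frame.
patch_centers = np.array([
    [0.0, 0.0],
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
])

def estimate_position(logits, centers):
    """Use class probabilities as regression weights: the estimate is a
    convex combination of the patch-center coordinates."""
    p = softmax(logits)
    return p @ centers

# Example: logits strongly favoring the first patch pull the estimate
# toward that patch's center.
logits = np.array([2.0, 0.5, 0.5, 0.1])
position_estimate = estimate_position(logits, patch_centers)
```

Because the weights sum to one, the estimated position always lies inside the convex hull of the patch centers, giving a continuous position output from a discrete classifier.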