Abstract

Accurate relative pose estimation of a spacecraft during a space landing operation is critical to ensuring a safe and successful landing. This paper presents a 3D Light Detection and Ranging (LiDAR) based AI relative navigation architecture for autonomous space landing. The proposed architecture is a hybrid Deep Recurrent Convolutional Neural Network (DRCNN) combining a Convolutional Neural Network (CNN) with a Recurrent Neural Network (RNN) based on a Long Short-Term Memory (LSTM) network. The acquired 3D LiDAR data are converted into multi-projected images, and the DRCNN is fed with depth and other multi-projected imagery. The CNN module of the architecture provides an efficient representation of features, while the RNN module, as an LSTM, delivers robust navigation motion estimates. A variety of landing scenarios are considered, simulated, and tested experimentally to evaluate the efficiency of the proposed architecture. LiDAR-based imagery data (range, slope, and elevation) are first created using the PANGU (Planet and Asteroid Natural Scene Generation Utility) software, and the proposed solution is evaluated on these data. Tests are then carried out with an instrumented aerial robot in the Gazebo simulator to reproduce landing scenarios on a synthetic but representative lunar terrain (3D digital elevation model). Finally, real experiments are conducted with a flying drone equipped with a Velodyne VLP-16 3D LiDAR sensor, generating real 3D scene point clouds while landing on a down-scaled mock-up of a lunar landing surface. All test results show that the proposed architecture delivers good 6 Degree of Freedom (DoF) pose precision at a reasonable computational cost.
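To make the hybrid CNN+LSTM structure concrete, the sketch below shows one plausible way to wire such a DRCNN in PyTorch: a CNN extracts features from each multi-projected LiDAR frame, an LSTM integrates them over the descent sequence, and a linear head regresses a 6-DoF pose per time step. This is a minimal illustration only; the layer sizes, channel counts, and the DRCNN/pose-head names are assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a hybrid DRCNN (CNN + LSTM) for 6-DoF pose regression.
# Hyperparameters are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn

class DRCNN(nn.Module):
    def __init__(self, in_channels=3, hidden_size=256):
        super().__init__()
        # CNN module: per-frame feature extraction from the multi-projected
        # LiDAR images (e.g. range, slope, and elevation channels).
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),  # -> 64 * 4 * 4 = 1024 features per frame
        )
        # RNN module: an LSTM integrates per-frame features over the
        # landing sequence for temporally consistent motion estimates.
        self.lstm = nn.LSTM(input_size=1024, hidden_size=hidden_size,
                            batch_first=True)
        # Regression head: 6-DoF relative pose per time step
        # (3 translation + 3 rotation parameters).
        self.pose = nn.Linear(hidden_size, 6)

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.pose(out)  # (batch, time, 6)

# Usage on a dummy 10-frame landing sequence of 128x128 projections.
model = DRCNN()
poses = model(torch.randn(2, 10, 3, 128, 128))
print(poses.shape)  # torch.Size([2, 10, 6])
```

Feeding the whole sequence through the LSTM, rather than estimating each frame independently, is what lets the recurrent module smooth out per-frame noise in the projected LiDAR imagery.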
