Abstract

This work proposes a new Light Detection and Ranging (LIDAR) based navigation architecture suited to uncooperative relative robotic space navigation. In contrast to current solutions that operate directly on 3D LIDAR data, our architecture employs a Deep Recurrent Convolutional Neural Network (DRCNN) that processes multi-projected imagery of the acquired 3D LIDAR data. The advantages of the proposed DRCNN are an effective feature representation, facilitated by its Convolutional Neural Network module; robust modeling of the navigation dynamics, provided by its Recurrent Neural Network module; and low processing time. Our trials evaluate several current state-of-the-art space navigation methods on various simulated but credible scenarios involving a satellite model developed by Thales Alenia Space (France); we additionally evaluate real satellite LIDAR data acquired in our lab. Results demonstrate that the proposed architecture, although trained solely on simulated data, adapts well and outperforms current algorithms on both simulated and real LIDAR data, affording better odometry accuracy at lower computational cost.
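
To make the architecture concrete, the sketch below shows one plausible reading of the DRCNN described above: a CNN module that compresses each projected LIDAR image into a feature vector, followed by a recurrent module that models the navigation dynamics across the sequence. The abstract does not specify layer configurations, so all layer sizes, the channel count of the projected imagery, and the 6-DoF relative-pose output are illustrative assumptions, not the authors' published design.

```python
# Hedged sketch of a CNN + RNN pipeline over projected LIDAR imagery.
# All hyperparameters below are assumptions chosen for readability.
import torch
import torch.nn as nn

class DRCNN(nn.Module):
    """CNN feature extractor followed by an LSTM over the image sequence."""

    def __init__(self, in_channels=1, hidden_size=256):
        super().__init__()
        # CNN module: maps each multi-projected LIDAR image
        # to a fixed-length feature vector.
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch*time, 128, 1, 1)
        )
        # RNN module: models temporal navigation dynamics
        # across consecutive LIDAR acquisitions.
        self.rnn = nn.LSTM(128, hidden_size, batch_first=True)
        # Regression head: 6-DoF relative pose
        # (3 translation + 3 rotation parameters), an assumed output.
        self.head = nn.Linear(hidden_size, 6)

    def forward(self, x):
        # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        feats = self.cnn(x.view(b * t, c, h, w)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out)  # per-step relative pose estimates

# Example: 2 sequences of 8 projected depth images, 64x64 pixels each.
model = DRCNN()
poses = model(torch.randn(2, 8, 1, 64, 64))
print(poses.shape)  # torch.Size([2, 8, 6])
```

Running the CNN once per frame and feeding the resulting feature sequence to the recurrent module is what keeps per-frame cost low, consistent with the low processing time claimed for the architecture.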
