Abstract

The future of humanity in space will increasingly require proximity operations with unexplored celestial bodies. Asteroids and comets, in particular, will play a crucial role in the future space economy and in exploration. These bodies are often characterized by unknown terrain maps and a lack of navigation infrastructure, which makes autonomous navigation challenging. In this context, visual matching algorithms cannot perform navigation when the map and the images captured online by the probe differ significantly in illumination conditions, scale, or rotation. To overcome these issues, in this work we propose a Siamese convolutional neural network capable of image matching, together with a position retrieval system, for reliable autonomous navigation. The system is robust to image noise, reusable across multiple terrains and landing sites, and requires no additional hardware to be deployed. NASA's OSIRIS-REx mission was taken as a reference for defining the navigation requirements, and a 3D model of Bennu was built to render training data. The image matching capabilities of the system were tested on two validation datasets: one made of rendered images and one made of real images provided by NASA's OSIRIS-REx mission. In addition, realistic descent scenarios were simulated to test the navigation accuracy of the system under realistic conditions and to evaluate its error recovery capabilities. The system achieved mission-compliant navigation accuracy on both real and simulated terrain maps, demonstrating the strong generalization capability of the proposed solution.
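
To illustrate the matching scheme described above, the sketch below shows a shared-weight Siamese CNN that scores a descent image against candidate map patches by embedding distance, with the best-scoring patch indicating the probe's position on the map. This is a minimal illustrative example, not the network from the paper: the layer sizes, class name, and tensor shapes are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseMatcher(nn.Module):
    """Shared-weight CNN encoder; two images are compared by embedding distance."""
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, embedding_dim),
        )

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        # Both branches share the same encoder weights: the defining
        # property of a Siamese network.
        emb_a = F.normalize(self.encoder(img_a), dim=1)
        emb_b = F.normalize(self.encoder(img_b), dim=1)
        return F.pairwise_distance(emb_a, emb_b)  # small distance => likely match

# Position retrieval sketch: match one descent frame against a set of
# terrain-map patches and pick the most similar one.
model = SiameseMatcher().eval()
with torch.no_grad():
    descent_img = torch.randn(1, 1, 128, 128)   # placeholder camera frame
    map_patches = torch.randn(16, 1, 128, 128)  # placeholder map tiles
    dists = model(descent_img.expand(16, -1, -1, -1), map_patches)
    best = dists.argmin().item()  # index of the best-matching map patch
```

In a training setup of this kind, such a network would typically be optimized with a contrastive or triplet loss so that matching image pairs map to nearby embeddings despite changes in illumination, scale, or rotation.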
