Abstract

Autonomous capabilities could be an essential aspect of future near-Earth asteroid exploration missions, enabling a fleet of low-cost spacecraft to be distributed to various targets for increased scientific and engineering returns. This paper studies the design of an adaptive offline policy for a surface imaging task under the influence of maneuver noise via reinforcement learning (RL). An adaptive policy that responds to changes in the environment is obtained by including asteroid parameters as part of the feedback state and randomizing them during training. The proximal policy optimization algorithm is used to train the policy. The robustness of the policy is tested in an environment that has unmodeled dynamical effects. Further, the overall performance of the autonomous exploration scheme is studied by combining the RL-based policy with the previously proposed autonomous navigation strategy that is built around optical measurements. End-to-end simulations that combine both onboard navigation and guidance are performed using asteroid Bennu as an example target, and the results show that the proposed scheme is robust.
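The adaptive-policy idea described above can be illustrated with a minimal sketch: asteroid parameters are drawn anew at each training episode and appended to the spacecraft state, so a single policy sees the parameters it must adapt to. This is not the authors' code; the environment, parameter names, ranges, and toy dynamics below are all illustrative assumptions.

```python
import numpy as np

# Illustrative parameter ranges only (assumed, not from the paper).
PARAM_RANGES = {
    "mu": (2.0, 8.0),           # gravitational parameter (arbitrary units)
    "spin_rate": (1e-4, 5e-4),  # asteroid rotation rate, rad/s (assumed)
}

class RandomizedAsteroidEnv:
    """Toy episodic environment with randomized asteroid parameters."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.params = None
        self.state = None

    def reset(self):
        # Sample new asteroid parameters for this episode (domain randomization).
        self.params = np.array(
            [self.rng.uniform(lo, hi) for lo, hi in PARAM_RANGES.values()]
        )
        # Spacecraft state: 2D position + velocity (toy example).
        self.state = self.rng.normal(size=4)
        return self._observe()

    def _observe(self):
        # The feedback state fed to the policy includes the asteroid
        # parameters, mirroring the adaptive-policy construction.
        return np.concatenate([self.state, self.params])

    def step(self, action):
        # Placeholder dynamics with additive maneuver execution noise.
        maneuver_noise = self.rng.normal(scale=0.01, size=2)
        self.state[2:] += action + maneuver_noise
        self.state[:2] += self.state[2:]
        reward = -np.linalg.norm(self.state[:2])  # toy stand-in objective
        return self._observe(), reward, False

env = RandomizedAsteroidEnv()
obs = env.reset()
print(obs.shape)  # state (4) + parameters (2)
```

In practice the policy would be trained on such an environment with PPO (e.g., via an off-the-shelf RL library); the key point is only that the parameter vector is part of the observation, so the trained policy can condition on it at deployment.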
