Abstract

Spacecraft relative trajectory planning is central to many space missions, such as on-orbit servicing and debris removal. Although early attempts at machine-learning-based trajectory planning have appeared, they still face the open problem of acquiring enough training samples for robust training. In this article, we introduce a meta-learning framework that improves the planner's ability to adapt to new initial conditions. To achieve this goal, we divide the training trajectories into sub-training samples and fake testing samples. The meta planner is then trained by repeatedly conducting a trial training-and-testing process: the gradient of the meta learner is first obtained on the sub-training sets and is further adjusted according to its testing performance on the fake testing data. The meta planner thus explicitly takes potential testing performance into account and, hence, alleviates overfitting when only a few training trajectories are available. Simulation results substantiate the effectiveness of our approach, as well as its ability to adapt quickly to new initial conditions without overfitting.
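The training procedure described in the abstract follows the standard inner/outer-loop pattern of gradient-based meta-learning (MAML-style): an inner gradient step on the sub-training trajectories, followed by an outer update driven by the adapted planner's loss on the fake testing trajectories. Below is a minimal, hypothetical sketch of that loop; the toy planner network, loss, task structure, and learning rates (planner_loss, inner_update, inner_lr, meta_lr, etc.) are illustrative assumptions, not the authors' implementation.

```python
# Minimal MAML-style sketch of the meta-training loop described in the abstract.
# All names and shapes here are illustrative assumptions, not the paper's code.
import jax
import jax.numpy as jnp

def planner_loss(params, trajectories):
    """Hypothetical supervised loss of a toy trajectory planner on one batch."""
    states, targets = trajectories
    preds = jnp.tanh(states @ params["W"] + params["b"])  # toy linear planner
    return jnp.mean((preds - targets) ** 2)

def inner_update(params, support, inner_lr=1e-2):
    """Inner loop: one gradient step on the sub-training ('support') trajectories."""
    grads = jax.grad(planner_loss)(params, support)
    return jax.tree_util.tree_map(lambda p, g: p - inner_lr * g, params, grads)

def meta_loss(params, task):
    """Evaluate the adapted planner on the held-out ('fake testing') trajectories."""
    support, query = task
    adapted = inner_update(params, support)
    return planner_loss(adapted, query)

@jax.jit
def meta_step(params, task_batch, meta_lr=1e-3):
    """Outer loop: average the fake-testing losses over tasks and update the meta-parameters."""
    loss, grads = jax.value_and_grad(
        lambda p: jnp.mean(jax.vmap(lambda t: meta_loss(p, t))(task_batch))
    )(params)
    params = jax.tree_util.tree_map(lambda p, g: p - meta_lr * g, params, grads)
    return params, loss

# Illustrative usage with random toy data (4 tasks, 3-D state, 2-D output):
key = jax.random.PRNGKey(0)
params = {"W": 0.1 * jax.random.normal(key, (3, 2)), "b": jnp.zeros(2)}
support = (jax.random.normal(key, (4, 16, 3)), jax.random.normal(key, (4, 16, 2)))
query = (jax.random.normal(key, (4, 8, 3)), jax.random.normal(key, (4, 8, 2)))
params, loss = meta_step(params, (support, query))
```

In this sketch, the outer gradient flows through the inner adaptation step, so the meta-parameters are updated according to how well the adapted planner performs on the held-out trajectories, which is the mechanism the abstract credits with reducing overfitting under few training trajectories.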
