Abstract
A reinforcement learning-based approach is proposed for designing multi-impulse rendezvous trajectories under linear relative motion. For relative motion in elliptical orbits, the relative state is propagated directly with the state transition matrix. The rendezvous problem is formulated as a Markov decision process that accounts for fuel consumption, transfer time, the relative state, and the dynamical model. An actor–critic algorithm is used to train a policy that generates rendezvous maneuvers, and results from numerical optimization (e.g., differential evolution) are adopted as an expert data set to accelerate training. Once the policy network is deployed, multi-impulse rendezvous trajectories can be generated on board. The proposed approach can also produce a feasible solution for many impulses (e.g., 20 impulses), which can serve as an initial guess for further optimization. Numerical examples with random initial states show that the proposed method is much faster than the evolutionary algorithm, at the cost of slightly worse performance indexes.
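As a minimal sketch of the propagation step described above: the abstract states that the relative state is advanced directly with a state transition matrix between impulses. The paper treats elliptical orbits; the sketch below uses the simpler circular-orbit Clohessy–Wiltshire STM purely for illustration, and the function names, the state ordering [x, y, z, vx, vy, vz], and the multi-impulse loop are assumptions, not the paper's implementation.

```python
import numpy as np

def cw_stm(n, t):
    """Closed-form Clohessy-Wiltshire state transition matrix for the
    relative state [x, y, z, vx, vy, vz] (radial, along-track,
    cross-track), where n is the chief orbit's mean motion [rad/s].
    Circular-orbit special case of the elliptical-orbit STM used in
    the paper."""
    s, c = np.sin(n * t), np.cos(n * t)
    return np.array([
        [4 - 3*c,     0, 0,    s/n,         2*(1 - c)/n,     0],
        [6*(s - n*t), 1, 0,    2*(c - 1)/n, (4*s - 3*n*t)/n, 0],
        [0,           0, c,    0,           0,               s/n],
        [3*n*s,       0, 0,    c,           2*s,             0],
        [6*n*(c - 1), 0, 0,   -2*s,         4*c - 3,         0],
        [0,           0, -n*s, 0,           0,               c],
    ])

def propagate_multi_impulse(x0, n, coast_times, dvs):
    """Propagate a relative state through a multi-impulse sequence:
    coast_times[k] is the coast duration before impulse dvs[k]
    (a 3-vector added to the velocity components). Returns the
    final relative state."""
    x = np.asarray(x0, dtype=float)
    for dt, dv in zip(coast_times, dvs):
        x = cw_stm(n, dt) @ x  # coast under the linear dynamics
        x[3:] += dv            # apply the impulsive velocity change
    return x
```

In an RL setting of the kind the abstract describes, a step like `propagate_multi_impulse` would form the environment transition, with the policy network outputting the impulse vector (and possibly the coast time) at each decision step.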