Abstract

Closed-loop, feedback-driven control laws can be used to solve low-thrust many-revolution trajectory design and guidance problems with minimal computational cost. Lyapunov-based control laws offer the benefit of stability, whilst their optimality can be improved by tuning their parameters. In this paper, a reinforcement learning framework is used to make the parameters of the Lyapunov-based Q-law state-dependent, increasing the optimality of the resulting transfers. The Jacobian of these state-dependent parameters is available analytically and, unlike in other optimisation approaches, can be used to enforce stability throughout the transfer. The results focus on GTO–GEO and LEO–GEO transfers in Keplerian dynamics, including the effects of eclipses. The impact of the network architecture on the behaviour is investigated for both time- and mass-optimal transfers. Robustness to navigation errors and thruster misalignment is demonstrated using Monte Carlo analyses. The resulting approach offers potential for on-board autonomous transfers and orbit reconfiguration.
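
To illustrate the core idea, the following is a minimal sketch (not the authors' implementation; the network sizes, element ordering, activation choices, and GTO-like example values are assumptions): a small neural network maps the current orbital elements to the Q-law weights, and automatic differentiation then provides the exact Jacobian of those state-dependent weights with respect to the state, which could be checked against a Lyapunov stability condition along the transfer.

```python
# Hypothetical sketch in JAX: state-dependent Q-law weights and their analytic Jacobian.
import jax
import jax.numpy as jnp

def init_params(key, sizes=(5, 32, 32, 5)):
    """Initialise a small MLP: input = 5 slow orbital elements,
    output = 5 Q-law element weights (one per targeted element). Sizes are assumed."""
    params = []
    for k, (n_in, n_out) in zip(jax.random.split(key, len(sizes) - 1),
                                zip(sizes[:-1], sizes[1:])):
        w = jax.random.normal(k, (n_in, n_out)) * jnp.sqrt(2.0 / n_in)
        params.append((w, jnp.zeros(n_out)))
    return params

def qlaw_weights(params, state):
    """State-dependent Q-law weights; softplus keeps them strictly positive,
    which preserves the positive-definiteness of the Lyapunov function Q."""
    x = state
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return jax.nn.softplus(x @ w + b)

key = jax.random.PRNGKey(0)
params = init_params(key)
# Example GTO-like elements (a [km], e, i [rad], RAAN, argp) -- illustrative values only.
state = jnp.array([24505.9, 0.725, 0.122, 0.0, 0.0])

weights = qlaw_weights(params, state)
# Jacobian of the weights with respect to the state, available analytically by construction.
dW_dstate = jax.jacfwd(lambda s: qlaw_weights(params, s))(state)
print(weights.shape, dW_dstate.shape)  # (5,) (5, 5)
```

In this sketch the positivity constraint on the output is what keeps the candidate Lyapunov function positive definite; how the Jacobian is actually used to enforce stability during the transfer is specific to the paper's formulation.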
