Abstract
With the rise of traffic in Earth orbit, spacecraft mission designs have placed an unprecedented demand on the capabilities of autonomous systems. In the early 2000s, state-of-the-art autonomous spacecraft controllers were designed for static, uncluttered environments. A little over a decade later, the challenges facing spacecraft autonomy include cluttered, dynamic environments with time-varying constraints, logical modes, fault tolerance requirements, uncertain dynamics, and complex maneuvers. With this rise in complexity, many research efforts have investigated more experimental control strategies, such as reinforcement learning (RL), as a potential solution. The research presented herein aims to expand on efforts to quantify the use of RL in autonomous rendezvous, proximity operations, and docking (ARPOD) environments, with consideration of the inherent drawbacks of the algorithms most common in the field. We present hierarchical model-based RL as a solution to an autonomous docking problem. The algorithm can learn satellite parameters, extrapolate trajectory information, and learn uncertain dynamics via data collection. By using gradient-free model predictive control logic, it can handle nondifferentiable objectives and complex constraints. Lastly, the hierarchical structure demonstrates an ability to generate feasible trajectories in the presence of integrated third-party subcontrollers commonly found in spacecraft. This study highlights the ability of the hierarchical algorithm to combine and manipulate third-party subpolicies to achieve trajectories it was not previously trained on.
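The abstract does not detail the controller itself, but the core idea of gradient-free model predictive control over a learned dynamics model can be sketched minimally. In the hypothetical example below, a 2-D double integrator stands in for a dynamics model learned from collected data, a keep-out-zone indicator penalty supplies a nondifferentiable objective, and random-shooting optimization (one common gradient-free MPC scheme, not necessarily the one used in the paper) selects the next action; all function names and parameters are illustrative assumptions.

```python
import numpy as np

def learned_dynamics(state, action, dt=1.0):
    # Illustrative stand-in for a model learned from collected data:
    # a 2-D double integrator approximating relative translational motion.
    pos, vel = state[:2], state[2:]
    vel = vel + action * dt
    pos = pos + vel * dt
    return np.concatenate([pos, vel])

def docking_cost(traj, goal, keepout_center=(2.5, 2.5), keepout_radius=0.5):
    # Terminal miss distance plus a hard indicator penalty for entering a
    # keep-out zone: a nondifferentiable objective that gradient-based
    # planners cannot optimize directly, but sampling-based MPC can.
    center = np.asarray(keepout_center)
    cost = np.linalg.norm(traj[-1][:2] - goal)
    for s in traj:
        if np.linalg.norm(s[:2] - center) < keepout_radius:
            cost += 100.0
    return cost

def random_shooting_mpc(state, goal, horizon=10, n_samples=500,
                        max_accel=0.1, seed=0):
    # Gradient-free MPC: sample candidate action sequences, roll each one
    # through the learned model, and keep the first action of the rollout
    # with the lowest cost.
    rng = np.random.default_rng(seed)
    best_cost, best_first_action = np.inf, np.zeros(2)
    for _ in range(n_samples):
        actions = rng.uniform(-max_accel, max_accel, size=(horizon, 2))
        s, traj = state, []
        for a in actions:
            s = learned_dynamics(s, a)
            traj.append(s)
        cost = docking_cost(traj, goal)
        if cost < best_cost:
            best_cost, best_first_action = cost, actions[0]
    return best_first_action
```

In a closed loop, only the first action of the best rollout is applied and the plan is recomputed at every step. A hierarchical layer of the kind the paper describes would instead select among pretrained third-party subpolicies at this level, rather than emitting raw accelerations.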