Abstract

The application of space robotic manipulators and heightened autonomy for In-Orbit Servicing (IOS) represents a paramount pursuit for leading space agencies, given the substantial threat posed by space debris to operational satellites and forthcoming space endeavors. This work presents a guidance algorithm based on Deep Reinforcement Learning (DRL) to solve the path-planning problem of a space manipulator during the motion-synchronization phase with the mission target. The goal is the trajectory generation and control of a spacecraft equipped with a 7-Degrees of Freedom (7-DoF) robotic manipulator, such that its end effector remains stationary with respect to the target point of capture. The Proximal Policy Optimization (PPO) DRL algorithm is used to optimize the manipulator's guidance law: the autonomous agent generates the desired joint rates of the robotic arm, which are then integrated and passed to a model-based feedback linearization controller. The agent is first trained to optimize its guidance policy and then tested extensively in a simulated environment representing the motion-synchronization scenario of an IOS mission.
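
The guidance-control architecture described above can be illustrated with a minimal sketch, assuming a trained PPO actor that maps the observed state to desired joint rates, a simplified joint-space dynamics model M(q) and c(q, q̇), and illustrative gains and time step; the names `policy`, `mass_matrix`, `bias_forces`, and `feedback_linearization_torque` are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

N_JOINTS = 7          # 7-DoF manipulator, as in the abstract
DT = 0.1              # guidance time step [s] (assumed)
KP, KD = 4.0, 4.0     # PD gains for the feedback-linearization law (assumed)

def policy(obs: np.ndarray) -> np.ndarray:
    """Stand-in for the trained PPO actor: maps the observed relative
    end-effector/target state to desired joint rates [rad/s]."""
    rng = np.random.default_rng(0)
    return 0.01 * rng.standard_normal(N_JOINTS)

def mass_matrix(q: np.ndarray) -> np.ndarray:
    """Placeholder joint-space inertia matrix M(q)."""
    return np.eye(N_JOINTS)

def bias_forces(q: np.ndarray, dq: np.ndarray) -> np.ndarray:
    """Placeholder nonlinear bias term c(q, dq) (Coriolis/centrifugal)."""
    return 0.05 * dq

def feedback_linearization_torque(q, dq, q_des, dq_des):
    """Model-based control: tau = M(q) v + c(q, dq), with v a PD law
    on the joint-space tracking error."""
    v = KP * (q_des - q) + KD * (dq_des - dq)
    return mass_matrix(q) @ v + bias_forces(q, dq)

# Guidance loop: the DRL agent outputs joint rates, which are integrated
# into a joint-angle reference and tracked by the model-based controller.
q = np.zeros(N_JOINTS)
dq = np.zeros(N_JOINTS)
q_des = q.copy()
for step in range(5):
    obs = np.concatenate([q, dq])      # simplified observation vector
    dq_des = policy(obs)               # DRL guidance: desired joint rates
    q_des = q_des + dq_des * DT        # integrate to a position reference
    tau = feedback_linearization_torque(q, dq, q_des, dq_des)
    # (Plant propagation omitted; in the paper this role is played by the
    # simulated IOS motion-synchronization environment.)
```

In this sketch the policy is a random stub; in the actual work it is the PPO-trained guidance policy, and the dynamics terms would come from the full spacecraft-manipulator model.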
