This paper presents a study on path planning for 6-DOF free-floating space robotic manipulators using Deep Deterministic Policy Gradient-based Reinforcement Learning. The focus is the development of a novel reward function tailored to critical requirements for efficient and effective manipulation in space: accurate pose alignment between the end-effector and the target, collision avoidance with both the target and the manipulator's own links, smoothing of joint velocities, adaptability to the strong dynamic coupling between the manipulator and its base spacecraft caused by a high manipulator-to-spacecraft mass ratio, and resilience to noise in the state observations. Uniquely, the proposed reward function employs quaternions rather than traditional Euler angles for orientation control, reducing pose misalignments and avoiding dynamic singularities. Our findings demonstrate that the Reinforcement Learning algorithm, guided by this new reward function integrating these enhancements and constraints, not only achieves the desired path planning objectives more efficiently but also converges faster. Furthermore, the Reinforcement Learning agent successfully manages the significant dynamic coupling effects arising from a high mass ratio between the robotic manipulator and the base spacecraft. Even under noisy state observations, the trained agent completes the path planning task, demonstrating the applicability of Reinforcement Learning to real space mission designs, where observation noise is inevitable. The study highlights the critical role of reward function design in the Reinforcement Learning training process and its consequential impact on solution quality, providing a solid foundation for future advancements in free-floating space robotic missions.
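The abstract does not give the exact form of the quaternion-based orientation term, but the idea can be illustrated with a minimal sketch. The function names (`quat_conj`, `quat_mul`, `orientation_reward`) and the specific penalty form (negative misalignment angle, scaled by a hypothetical `weight`) are assumptions for illustration, not the paper's actual reward function:

```python
import math

def quat_conj(q):
    """Conjugate of a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    )

def orientation_reward(q_ee, q_target, weight=1.0):
    """Reward term penalizing end-effector/target orientation misalignment.

    Computes the error quaternion q_target * conj(q_ee) and returns the
    negative rotation angle it encodes, so perfect alignment yields 0.
    """
    w_err = quat_mul(q_target, quat_conj(q_ee))[0]
    # abs() handles the double cover: q and -q represent the same rotation
    angle = 2.0 * math.acos(max(-1.0, min(1.0, abs(w_err))))
    return -weight * angle
```

Because the misalignment angle is continuous and singularity-free over the whole rotation group, a term of this kind avoids the gimbal-lock discontinuities that an Euler-angle error would introduce into the reward signal.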