This paper introduces an algorithm for enhancing robot learning through dynamic trajectory modeling and time-dependent state analysis. By combining reinforcement learning (RL) with trajectory planning, the proposed approach improves the robot’s adaptability across diverse environments and tasks. The framework begins with a comprehensive analysis of the robot’s operational space, covering both Cartesian coordinates and the configuration space. Modeling trajectories and states within these spaces allows the robot to sequentially track arbitrary states, facilitating efficient task execution across varied scenarios. Experimental results demonstrate the algorithm’s efficacy in manipulation tasks and in path planning within dynamic environments, where the combined approach yields significantly improved adaptability and precise execution of complex tasks. This research advances robot learning methodologies, particularly for human–robot interaction, with promising applications in manufacturing, healthcare, and logistics.
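The abstract does not specify the algorithm’s details, but the core idea of sequentially tracking time-dependent states along a modeled trajectory can be illustrated with a minimal sketch. The example below is hypothetical and not the paper’s method: it assumes a 2-DOF configuration space, a time-parameterized reference trajectory `reference(t)`, and a simple proportional-plus-feedforward tracking law; all names and parameters are invented for illustration.

```python
import numpy as np

def reference(t):
    """Time-parameterized reference state in a 2-DOF configuration space."""
    return np.array([np.sin(t), np.cos(t)])

def track(q0, dt=0.01, T=5.0, gain=4.0):
    """Sequentially track the time-dependent reference state.

    At each step the controller drives the configuration q toward the
    current reference reference(t) (proportional feedback) while adding
    the reference's time derivative (feedforward), so the tracking error
    decays as the robot follows the trajectory.
    """
    q = np.asarray(q0, dtype=float)
    errors = []
    for k in range(int(T / dt)):
        t = k * dt
        e = reference(t) - q
        # feedback term gain*e plus feedforward d/dt reference(t)
        q = q + dt * (gain * e + np.array([np.cos(t), -np.sin(t)]))
        errors.append(np.linalg.norm(e))
    return q, errors

final_q, errors = track(q0=[1.0, -1.0])
```

With this law the error shrinks roughly geometrically per step, so even a poor initial configuration converges onto the trajectory; an RL component, as described in the abstract, would instead learn the control update from experience rather than use a fixed gain.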