Abstract

Robot force control guarantees compliance and safe human–robot interaction in increasingly varied work environments, but its parameters must be customized to the robot's structure and are difficult to tune in unstructured settings. Although reinforcement learning offers a way to adapt these parameters automatically, the policy usually has to be retrained from scratch when transferred to a new robot, even for the same task. This paper proposes an episodic Natural Actor-Critic algorithm with action limits to improve robot admittance control and to transfer motor skills between robots. Motion skills learned by a structurally simple simulated robot can be applied to a structurally complex real robot, reducing training difficulty and time. The admittance controller ensures that the robot's compliance is realizable and mobile in all directions, while the reinforcement learning algorithm builds a model of the environment and adaptively adjusts the impedance parameters during the robot's motion. In typical contact tasks, motor skills are trained on a simple-structure robot in simulation and then used by a complex-structure robot in reality to perform the same task. The real robot's performance in each task is close to the simulated robot's in the same environment, which verifies the effectiveness of the method.
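
As a minimal, illustrative sketch only (not the authors' implementation), the Python snippet below shows the kind of admittance-control loop the abstract describes: a reference trajectory is rendered compliant through per-axis mass, damping, and stiffness parameters, and a learned policy could supply the damping and stiffness values, clipped to action limits. All function and variable names (admittance_step, apply_action_limits, the dt and limit values) are assumptions introduced here for illustration.

    import numpy as np

    def admittance_step(x, xd, x_ref, f_ext, M, D, K, dt):
        """One discrete admittance update per Cartesian axis.

        x, xd   : current compliant position and velocity
        x_ref   : reference trajectory position
        f_ext   : measured external force
        M, D, K : impedance parameters (virtual mass, damping, stiffness)
        dt      : control period in seconds
        """
        # Solve M*xdd + D*xd + K*(x - x_ref) = f_ext for the acceleration.
        xdd = (f_ext - D * xd - K * (x - x_ref)) / M
        xd_new = xd + xdd * dt      # integrate acceleration to velocity
        x_new = x + xd_new * dt     # integrate velocity to commanded position
        return x_new, xd_new

    def apply_action_limits(params, low, high):
        # Clip policy outputs so the closed loop stays in a safe range.
        return np.clip(params, low, high)

    if __name__ == "__main__":
        x = np.zeros(3); xd = np.zeros(3); x_ref = np.zeros(3)
        M = np.full(3, 2.0)
        # Placeholder values standing in for what a trained policy would output.
        D = apply_action_limits(np.full(3, 40.0), 5.0, 200.0)
        K = apply_action_limits(np.full(3, 300.0), 50.0, 1000.0)
        f_ext = np.array([0.0, 0.0, -5.0])   # e.g. a contact force along z
        for _ in range(1000):
            x, xd = admittance_step(x, xd, x_ref, f_ext, M, D, K, dt=0.002)
        print("steady-state deflection:", x)  # approaches f_ext / K per axis

In this sketch, lowering K along a contact direction makes the robot yield more to external force, which is the behavior an adaptive impedance policy would exploit during contact tasks.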
