Abstract

Robotic autonomous assembly is critical to intelligent manufacturing and has long been a research hotspot. Most previous approaches rely on prior knowledge, such as the geometric parameters and pose information of the assembled parts, which are hard to estimate in unstructured environments. This paper proposes a residual reinforcement learning (RL) policy for robotic assembly that combines visual and force information. The residual RL policy, which consists of a visual-based policy and a force-based policy, is trained and tested in an end-to-end manner. During assembly, the visual-based policy focuses on spatial search, while the force-based policy handles contact-rich interactive behaviors. Experimental results demonstrate the high sample efficiency of our approach and its ability to generalize across diverse assembly tasks with varying geometries, clearances, and configurations. Validation experiments are conducted both in simulation and on a real robot.
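To make the residual structure concrete, the sketch below shows one common way such a policy can be composed: a base action from a visual policy plus a scaled residual correction from a force policy. All class names, signatures, and gains here are illustrative assumptions for exposition, not the authors' actual implementation.

```python
import numpy as np


class VisualPolicy:
    """Stand-in for a learned visual policy: proposes a coarse
    Cartesian motion (dx, dy, dz) toward the estimated insertion point."""

    def act(self, image_features: np.ndarray) -> np.ndarray:
        # Placeholder behavior: a fixed downward approach action.
        return np.array([0.0, 0.0, -0.01])


class ForcePolicy:
    """Stand-in for a learned force policy: outputs a small residual
    correction from force/torque readings to handle contact."""

    def act(self, wrench: np.ndarray) -> np.ndarray:
        # Placeholder behavior: comply by moving away from lateral forces.
        return np.array([-0.001 * wrench[0], -0.001 * wrench[1], 0.0])


def residual_action(image_features: np.ndarray,
                    wrench: np.ndarray,
                    alpha: float = 1.0) -> np.ndarray:
    """Combine the visual base action with a scaled force residual."""
    base = VisualPolicy().act(image_features)
    residual = ForcePolicy().act(wrench)
    return base + alpha * residual


# Example: lateral contact force of +2 N in x, -1 N in y.
action = residual_action(np.zeros(8),
                         np.array([2.0, -1.0, 0.0, 0.0, 0.0, 0.0]))
print(action)  # downward approach plus a contact-driven lateral correction
```

In this decomposition, the visual policy alone drives the free-space search phase, and the residual term only becomes significant once contact forces appear, which matches the division of labor described in the abstract.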
