Abstract

In manufacturing, there is vast potential to make automated systems more adaptable through efficient robotic skill acquisition. This paper examines the use of deep reinforcement learning to automate contact-rich, compliant assembly, considering an exemplary real-world use case from car assembly. To obtain training data, we use a simulated representation of the production system comprising a robotic arm controlled by a deep reinforcement learning agent trained with proximal policy optimization (PPO). Furthermore, we conduct a basic system analysis to improve the similarity between the physical and simulated environments. After iteratively training and evaluating different models, which differ in reward design and in the degree of environment variation, we validate the results on the physical hardware. We successfully obtain agents that generate expedient trajectories and generalize to changing environments. Success rates well above 90% are achieved in simulation, even with high degrees of variation in the target position and in the parts' surface friction. For the transfer to the physical assembly system, we conclude that further optimization is necessary to obtain truly compliant behavior.
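The abstract names proximal policy optimization as the training algorithm. As context for readers unfamiliar with it, the sketch below shows PPO's clipped surrogate objective in plain Python; it is an illustrative assumption with a default clip range of 0.2, not the authors' implementation.

```python
def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    """Clipped surrogate objective of PPO for a single sample.

    ratio: probability ratio pi_new(a|s) / pi_old(a|s)
    advantage: estimated advantage of taking action a in state s
    epsilon: clip range (0.2 is a common default; assumed here)
    """
    # Clamp the ratio to [1 - epsilon, 1 + epsilon] ...
    clipped = max(1.0 - epsilon, min(ratio, 1.0 + epsilon))
    # ... and take the pessimistic (lower) of the two surrogates,
    # which caps the incentive for large policy updates.
    return min(ratio * advantage, clipped * advantage)

# Positive advantage: the gain from raising the ratio is capped at 1.2 * A.
print(ppo_clip_objective(1.5, 2.0))   # capped at 1.2 * 2.0 = 2.4
# Negative advantage: lowering the ratio below 0.8 yields no extra benefit.
print(ppo_clip_objective(0.5, -1.0))  # min(-0.5, -0.8) = -0.8
```

In practice this per-sample term is averaged over a batch of rollout data and maximized by gradient ascent on the policy parameters.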
