Abstract

Tactile perception can accurately reflect the contact state by collecting force and torque information, but it is not sensitive to changes in the relative position and posture of assembly objects. Visual perception is highly sensitive to such pose changes, but it cannot accurately reflect the contact state, especially when the objects occlude each other. A robot can perceive its environment more accurately if visual and tactile perception are combined. Therefore, this paper proposes an alignment method of combined perception for peg-in-hole assembly based on self-supervised deep reinforcement learning. The agent first observes the environment through visual sensors and predicts an alignment-adjustment action from the visual features of the contact state. The agent then judges the contact state from the force and torque information collected by a force/torque sensor, and the alignment-adjustment action selected according to this contact state serves as the label for the visual prediction. The visual-perception network then backpropagates against this label to correct its weights. With continued iterative training, the agent learns the alignment skill of combined perception. A robot system is built in CoppeliaSim for simulation training and testing. The simulation results show that the combined-perception method achieves higher assembly efficiency than single-modality perception.
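The training loop described above can be sketched in miniature: a visual policy predicts a discrete alignment action, a rule-based tactile module derives the correct action from force/torque readings, and that tactile decision is used as the self-supervised label for a gradient update of the visual network. The action names, the tactile rule, and the linear policy here are illustrative stand-ins, not the paper's actual networks or contact-state logic.

```python
import math
import random

# Hypothetical discrete alignment actions (illustrative, not from the paper).
ACTIONS = ["tilt+x", "tilt-x", "tilt+y", "tilt-y"]

def tactile_label(force_torque):
    """Rule-based tactile module: choose the corrective action from the
    dominant torque component (a stand-in for contact-state judgment)."""
    tx, ty = force_torque
    if abs(tx) >= abs(ty):
        return 0 if tx > 0 else 1
    return 2 if ty > 0 else 3

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

class VisualPolicy:
    """Linear policy over visual features, trained on the tactile label."""
    def __init__(self, n_feat, n_act, lr=0.5):
        self.w = [[0.0] * n_feat for _ in range(n_act)]
        self.lr = lr

    def predict(self, x):
        logits = [sum(wi * xi for wi, xi in zip(row, x)) for row in self.w]
        return softmax(logits)

    def update(self, x, label):
        """One cross-entropy gradient-descent step toward the tactile label."""
        p = self.predict(x)
        for a in range(len(self.w)):
            grad = p[a] - (1.0 if a == label else 0.0)
            for i in range(len(x)):
                self.w[a][i] -= self.lr * grad * x[i]

# Toy setting: the visual feature is a noisy view of the torque reading,
# so the visual policy can learn to imitate the tactile rule.
random.seed(0)
policy = VisualPolicy(n_feat=2, n_act=4)
for _ in range(2000):
    ft = (random.uniform(-1, 1), random.uniform(-1, 1))
    visual = (ft[0] + random.gauss(0, 0.05), ft[1] + random.gauss(0, 0.05))
    label = tactile_label(ft)      # tactile perception supplies the label
    policy.update(visual, label)   # self-supervised update of the visual net

# After training, the visual policy should agree with the tactile rule.
correct = 0
for _ in range(500):
    ft = (random.uniform(-1, 1), random.uniform(-1, 1))
    p = policy.predict(ft)
    if max(range(4), key=lambda a: p[a]) == tactile_label(ft):
        correct += 1
accuracy = correct / 500
```

The point of the sketch is the supervision pattern: no hand-labeled data is needed, because the tactile channel generates labels online while the visual channel learns to predict them.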

Highlights

  • Fully observing environmental information in a complex unstructured environment is an important challenge for intelligent robots

  • The hole-finding stage uses visual perception and the alignment stage uses tactile perception; this baseline is called multiple perceptions in stages (MP)

  • We propose an alignment method of combined perception for peg-in-hole assembly with self-supervised deep reinforcement learning


Introduction

Fully observing environmental information in a complex unstructured environment is an important challenge for intelligent robots, and it is difficult to meet current complex work demands by relying on a single type of sensor to perceive the environment. Traditional programming methods for assembly tasks require technicians with a high technical level and rich work experience to complete a large amount of code writing and parameter deployment; this takes time and effort and limits the flexibility of the production line. The teaching method likewise requires extensive parameter deployment. During interaction, robots rely mainly on visual and tactile perception to sense the environment.

