Abstract

Robotic assembly in unstructured environments involves parts with varied, previously unseen geometries. The central geometry-perception problem is to estimate the relative pose between mating parts and then plan the corresponding actions. Current perception methods rely mainly on template matching against specific parts and therefore generalize poorly across geometries. To improve perception generalization, we propose an operation framework that combines geometry perception with motion planning for robotic assembly manipulation. Semantic segmentation, which generalizes well in image processing, is used to extract the geometric features of assembly parts; the segmentation networks are trained on a self-built peg-hole image dataset. To reduce measurement noise, virtual point clouds of the peg and hole cross-sections are reconstructed from the semantic masks using the camera imaging model, and a uniformization algorithm is applied to improve point-cloud quality. Registration is then performed on the noise-free, uniform virtual point clouds, allowing the relative pose between the two mating parts to be estimated more precisely. We also define a sequence of interactive geometry-perception actions. The framework is validated in assembly experiments on part geometries both inside and outside the training dataset. The results show that the proposed framework generalizes geometry perception across assembly parts and enables intelligent robotic assembly.
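
The paper itself gives no code; the sketch below is only a rough illustration of the mask-to-point-cloud reconstruction and registration pipeline the abstract describes. It assumes the Open3D library, a pinhole camera with intrinsics fx, fy, cx, cy, and a depth image aligned to the segmentation mask, and it substitutes voxel down-sampling for the paper's uniformization algorithm and point-to-point ICP for its registration step. All function names here are hypothetical, not the authors' implementation.

```python
import numpy as np
import open3d as o3d  # assumed dependency for point-cloud handling and ICP


def mask_to_point_cloud(mask, depth, fx, fy, cx, cy):
    """Back-project pixels inside a semantic mask to 3D camera coordinates
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    v, u = np.nonzero(mask)          # pixel coordinates covered by the mask
    z = depth[v, u]                  # depth assumed aligned with the mask
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.column_stack((x, y, z)))
    return pcd


def estimate_relative_pose(peg_pcd, hole_pcd, voxel=0.002):
    """Uniformize both clouds (here: voxel down-sampling, standing in for the
    paper's uniformization algorithm), then register them with point-to-point
    ICP to obtain the peg-to-hole transform as a 4x4 homogeneous matrix."""
    src = peg_pcd.voxel_down_sample(voxel)
    tgt = hole_pcd.voxel_down_sample(voxel)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_correspondence_distance=0.01,
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration
            .TransformationEstimationPointToPoint())
    return result.transformation
```

Under these assumptions, the relative pose returned by the registration step would feed directly into the motion-planning stage, which servoes the peg toward the hole axis before insertion.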
