Abstract

In recent years, intelligent robotic sorting has become a popular application of industrial robots, and its performance is largely determined by object pose estimation. Many researchers have devoted effort to designing efficient pose estimation methods based on deep learning. However, two challenges remain. One is that annotating datasets is labor-intensive and time-consuming, which makes it difficult to build large pose estimation datasets. The other is that, for objects of interest, the final pose estimate depends on an accurate 3D model of the object, which confines most existing object pose estimation methods to the instance level, i.e., only objects already known to the method can be identified. To address these challenges, this paper employs a game engine to build a virtual dataset that can be annotated automatically, and proposes a shape-based robot vision sorting approach that can efficiently classify and grasp objects with regular shapes. Experimental results indicate that the proposed approach achieves category-level object pose estimation and thus makes robot grasping more broadly applicable.
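
The claim that a game-engine-built virtual dataset can be annotated automatically rests on the fact that the simulator already knows every object's ground-truth pose, so labels can be exported alongside each rendered frame with no manual work. The sketch below only illustrates that idea; the `SceneObject` and `SimulatedScene` types, their fields, and the JSON layout are hypothetical placeholders, not the paper's actual pipeline.

```python
# Minimal sketch of automatic pose annotation from a simulated scene.
# All type names and fields here are assumptions for illustration; the
# abstract does not specify the engine interface or annotation format.

import json
from dataclasses import dataclass
from typing import List


@dataclass
class SceneObject:
    category: str                 # e.g. "box", "cylinder" (category-level label)
    position: List[float]         # [x, y, z] in the camera frame
    rotation: List[float]         # orientation as a quaternion [qx, qy, qz, qw]


@dataclass
class SimulatedScene:
    image_path: str               # path to the RGB frame rendered by the engine
    objects: List[SceneObject]    # ground-truth state of every object in the frame


def export_annotations(scenes: List[SimulatedScene], out_path: str) -> None:
    """Write one annotation record per rendered frame, with no manual labeling."""
    records = []
    for scene in scenes:
        records.append({
            "image": scene.image_path,
            "objects": [
                {
                    "category": obj.category,
                    "position": obj.position,
                    "rotation": obj.rotation,
                }
                for obj in scene.objects
            ],
        })
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)
```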
