Abstract

The manufacturing industry is undergoing rapid evolution, necessitating flexible and adaptable robots. However, configuring such machines requires technical experts, who are hard to find, especially for small and medium enterprises. The process therefore needs to be simplified so that non-experts can configure robots. During such configuration, one key aspect is the definition of objects’ grasping poses. The literature proposes deep learning techniques to compute grasping poses automatically and facilitate the process. Nevertheless, practical implementation for inexperienced factory operators can be challenging, especially when task-specific knowledge and constraints must be considered. To overcome this barrier, we propose an approach that facilitates teaching such poses. Our method, employing a novel user grasp metric, combines the operator's initial grasp guess, given via a 3D spatial device, with a state-of-the-art deep learning algorithm, thus returning grasping poses that are reliable yet close to the operator's initial guess. We compare this approach against commercial grasping pose definition interfaces through a user test involving 28 participants, and against state-of-the-art deep learning grasp estimators. The results demonstrate a significant improvement in system usability (+24%) and a reduced workload (-16%). Furthermore, our experiments reveal an increased grasp success rate when utilizing the user grasp metric, surpassing state-of-the-art deep learning grasp estimators.
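
The abstract does not specify the exact form of the user grasp metric; the following is only a minimal illustrative sketch of the general idea of re-ranking network grasp candidates by a score that trades off predicted grasp quality against proximity to the operator's initial guess. The function names, the exponential proximity term, and the weight `alpha` are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def pose_distance(t1, q1, t2, q2, w_rot=0.1):
    """Illustrative pose distance: Euclidean translation distance plus a
    weighted quaternion geodesic angle (quaternions as [x, y, z, w])."""
    d_trans = np.linalg.norm(np.asarray(t1) - np.asarray(t2))
    dot = np.clip(abs(np.dot(q1, q2)), 0.0, 1.0)
    d_rot = 2.0 * np.arccos(dot)
    return d_trans + w_rot * d_rot

def select_grasp(candidates, user_t, user_q, alpha=0.5):
    """Re-rank grasp candidates from a deep learning estimator by combining
    the network confidence with closeness to the operator's initial guess.

    candidates: list of dicts with keys 't' (xyz), 'q' (quaternion), and
                'score' (network confidence in [0, 1]).
    alpha: hypothetical weight balancing confidence vs. proximity.
    """
    best, best_score = None, -np.inf
    for c in candidates:
        # Proximity term decays with distance from the user's initial guess.
        proximity = np.exp(-pose_distance(c['t'], c['q'], user_t, user_q))
        combined = alpha * c['score'] + (1.0 - alpha) * proximity
        if combined > best_score:
            best, best_score = c, combined
    return best, best_score
```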
