Abstract

To realize a robust robotic grasping system for unknown objects in an unstructured environment, large amounts of grasp data and 3D model data for the objects are required; the size of these datasets directly affects the rate of successful grasps. To reduce the time cost of data acquisition and labeling and to increase the rate of successful grasps, we developed a self-supervised learning mechanism to control grasp tasks performed by manipulators. First, a manipulator automatically collects the point cloud of the objects from multiple perspectives to increase the efficiency of data acquisition. The complete point cloud of the objects is obtained using the hand-eye vision of the manipulator and the truncated signed distance function (TSDF) algorithm. Then, the point cloud data for the objects are used to generate a series of six-degrees-of-freedom (6-DOF) grasp poses, and a force-closure decision algorithm adds a grasp quality label to each grasp pose, realizing automatic labeling of the grasp data. Finally, the point cloud in the gripper closing area corresponding to each grasp pose is extracted and used to train the grasp-quality classification model for the manipulator. Actual grasping experiments demonstrate that the proposed self-supervised learning method can increase the manipulator's rate of successful grasps.

Note to Practitioners—Most existing grasp planning methods for manipulators rely on public datasets or simulation data to train their models. Owing to the limited types of objects and the limited amount of data in public datasets, as well as the lack of real sensor noise in simulation data, the robustness of the trained models is insufficient, and they are difficult to apply in unstructured production environments. To solve these problems, we propose a 6-DOF grasp planning method based on self-supervised learning and introduce a self-supervised learning mechanism to address grasp data acquisition in real scenes. The manipulator automatically collects object data from multiple perspectives, performs desktop-level 3D reconstruction, and finally uses a force-closure decision algorithm to label the data automatically, thereby realizing automatic acquisition and labeling of grasp data in a real scenario. Preliminary experiments show that this method can obtain high-quality grasp data and can be applied to grasp operations in real multi-target and cluttered environments. However, it has not been tested in actual production environments. This paper focuses on the data acquisition module in the 6-DOF grasp planning framework. In future research, we will design a more efficient grasp planning module to improve the grasp efficiency of the manipulator.
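
As a concrete illustration of the multi-view acquisition and TSDF fusion step described above, the following sketch fuses several RGB-D frames into a single point cloud. It assumes Open3D, color/depth images saved to disk, and camera-to-world poses obtained from hand-eye calibration and the manipulator's forward kinematics; the function name and parameter values are illustrative and not the authors' implementation.

```python
# Minimal multi-view TSDF fusion sketch (assumes Open3D; parameters are illustrative).
import numpy as np
import open3d as o3d

def fuse_views(color_files, depth_files, cam_poses, intrinsic,
               voxel_size=0.004, sdf_trunc=0.02):
    """Integrate multiple RGB-D views into a TSDF volume and return the fused point cloud.

    cam_poses: list of 4x4 camera-to-world transforms, e.g. from hand-eye
    calibration combined with the manipulator's forward kinematics.
    """
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=voxel_size,
        sdf_trunc=sdf_trunc,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

    for color_path, depth_path, pose in zip(color_files, depth_files, cam_poses):
        color = o3d.io.read_image(color_path)
        depth = o3d.io.read_image(depth_path)
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            color, depth, depth_scale=1000.0, depth_trunc=1.0,
            convert_rgb_to_intensity=False)
        # integrate() expects the world-to-camera extrinsic, hence the inverse.
        volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))

    return volume.extract_point_cloud()

# Example usage (intrinsic values are placeholders for a typical RGB-D camera):
# intrinsic = o3d.camera.PinholeCameraIntrinsic(640, 480, 615.0, 615.0, 320.0, 240.0)
# cloud = fuse_views(color_files, depth_files, cam_poses, intrinsic)
```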

Highlights

  • In the field of robotic arms, research on tasks such as gripping, button operation, and object pushing [1] is popular

  • To solve the above problems, we propose a 6-DOF grasp planning method based on self-supervised learning and introduce a self-supervised learning mechanism to solve the problem of grasp data acquisition in real scenes

  • The method only performed a planar object grasping study; its grasping angles are limited, and it is not suitable for spatial 6-DOF grasping. Another example of model-free object grasping technology is the grasp pose detection (GPD) method proposed by Gualtieri et al. [8], which generates a series of candidate grasp poses from the 3D point cloud of the object and the geometric information of the parallel two-fingered gripper at the end of the manipulator, creates classification labels through a force-closure analysis (a simplified version of such a check is sketched below), and classifies grasp pose quality using a convolutional neural network (CNN)
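
For a parallel two-fingered gripper, the force-closure labeling mentioned above can be approximated by a standard antipodal friction-cone test over the two contact points inside the gripper closing region. The sketch below is a simplified stand-in for the paper's force-closure decision algorithm, not its exact formulation; the friction coefficient mu and the function name are assumptions.

```python
# Simplified antipodal force-closure test for a parallel-jaw grasp
# (a stand-in for the paper's force-closure decision algorithm).
import numpy as np

def antipodal_force_closure(p1, n1, p2, n2, mu=0.4):
    """Return True if two contacts satisfy the antipodal force-closure condition.

    p1, p2: 3D contact points on the object surface (inside the gripper closing area).
    n1, n2: outward unit surface normals at those points.
    mu:     Coulomb friction coefficient; the friction cone half-angle is arctan(mu).
    """
    axis = p2 - p1
    dist = np.linalg.norm(axis)
    if dist < 1e-6:
        return False
    axis /= dist

    half_angle = np.arctan(mu)
    # The grasp axis must lie inside both friction cones: it should oppose n1 at p1
    # and oppose n2 at p2.
    angle1 = np.arccos(np.clip(np.dot(-n1, axis), -1.0, 1.0))
    angle2 = np.arccos(np.clip(np.dot(-n2, -axis), -1.0, 1.0))
    return angle1 <= half_angle and angle2 <= half_angle

# Example: two opposing contacts on a flat-sided object yield a positive grasp label.
# p1, n1 = np.array([0.0, -0.02, 0.0]), np.array([0.0, -1.0, 0.0])
# p2, n2 = np.array([0.0,  0.02, 0.0]), np.array([0.0,  1.0, 0.0])
# label = antipodal_force_closure(p1, n1, p2, n2)   # True
```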


Summary

A Self-Supervised Learning-Based 6-DOF Grasp Planning Method for Manipulator

Gang Peng, Member, IEEE, Zhenyu Ren, Hao Wang, Xinde Li, Senior Member, IEEE, and Mohammad Omar Khyam.

INTRODUCTION
PROBLEM STATEMENT
Desktop-Level 3D Reconstruction
Self-Supervised Learning Mechanism
DEEP LEARNING-BASED GRASP QUALITY CLASSIFICATION
EXPERIMENTAL RESULTS AND ANALYSIS
Data Acquisition Experiment
Grasp Experiment
CONCLUSION