Abstract

Robotic grasping plays an essential role in human-machine cooperation across a wide range of household and industrial applications. Although humans can instinctively execute grasps accurately, stably, and rapidly even in a constantly changing environment, intelligent grasping remains a challenging task for robots. As a prerequisite for grasping, a robot must correctly identify the best grasping location on an unknown object, typically through an artificial-intelligence approach, which is still an open problem. This paper proposes a new grasp-generation-and-selection convolutional neural network (GGS-CNN), which is trained and implemented in a digital twin of intelligent robotic grasping (DTIRG). Defining a grasp by its 3-D position, rotation angle, and gripper width, the GGS-CNN generates grasp candidates by transforming red-green-blue-depth (RGB-D) images into feature maps and evaluates the quality of the selected grasps. The GGS-CNN is trained in both the virtual environment and the real world of the DTIRG to detect accurate grasps. In grasping tests, the proposed GGS-CNN achieves success rates of 96.7% on single objects and 93.8% on cluttered objects, and obtains the best grasp from an RGB-D image in less than 40 ms.
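The abstract's grasp parameterization (3-D position, rotation angle, and gripper width) can be made concrete with a minimal sketch. Assuming the network outputs per-pixel quality, angle, and width maps, as grasp-detection CNNs of this kind commonly do, the following illustrates how a 5-D grasp could be decoded from such maps. The function `decode_best_grasp`, the intrinsics `fx, fy, cx, cy`, and all names below are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch (not from the paper): decode the best grasp from
# per-pixel network outputs. A grasp is (x, y, z, theta, w): 3-D position,
# rotation angle, and gripper width, matching the abstract's definition.

def decode_best_grasp(quality_map, angle_map, width_map, depth_image,
                      fx, fy, cx, cy):
    """Select the pixel with the highest predicted grasp quality and
    back-project it to a 5-D grasp using pinhole camera intrinsics."""
    v, u = np.unravel_index(np.argmax(quality_map), quality_map.shape)
    z = float(depth_image[v, u])       # depth at the best pixel (m)
    x = (u - cx) * z / fx              # back-project pixel to camera frame
    y = (v - cy) * z / fy
    theta = float(angle_map[v, u])     # predicted rotation angle (rad)
    w = float(width_map[v, u])         # predicted gripper opening width (m)
    return x, y, z, theta, w

# Toy usage: random maps stand in for the network's outputs.
h, w_px = 480, 640
rng = np.random.default_rng(0)
grasp = decode_best_grasp(rng.random((h, w_px)),
                          rng.uniform(-np.pi / 2, np.pi / 2, (h, w_px)),
                          rng.uniform(0.0, 0.1, (h, w_px)),
                          rng.uniform(0.3, 1.0, (h, w_px)),
                          fx=615.0, fy=615.0, cx=320.0, cy=240.0)
print(grasp)
```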
