Abstract

Robotic grasping in unstructured environments is a research hotspot in the realization of human-computer interaction and automation. To address grasping efficiency and the grasping of objects at multiple scales in unstructured environments, we propose a modified GG-CNN model, a fast and efficient robotic grasping method. The modified GG-CNN model takes an RGB-D image as input, extracts features with convolutional layers, and then upsamples them back to the original resolution with transposed convolutional layers to estimate grasp quality and grasp angle at each pixel. The grasp pose with the largest grasp quality is selected for execution. To achieve better robustness, the background plane is estimated by the least-squares method and used as the workspace frame, so that the height of the background plane remains zero. The modified model is trained and evaluated on the Cornell Grasp Dataset, achieving an accuracy of 96.63%. The proposed method is used to grasp household objects and fruits, including some challenging small objects, with grasp success rates of 94.5% and 98.0%, respectively. Experimental results show that our method can perform real-time robotic grasping in unstructured environments.
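
As a rough illustration of the encoder-decoder architecture described above, the sketch below builds a small fully convolutional network in PyTorch. The class name GraspNet, the layer sizes, and the 300x300 input resolution are illustrative assumptions, not the authors' published architecture; following the original GG-CNN convention, the grasp angle is predicted as cos(2θ) and sin(2θ) maps so that it stays unambiguous under the gripper's 180° symmetry.

```python
# Minimal sketch of a GG-CNN-style encoder-decoder (assumptions: PyTorch,
# 300x300 RGB-D input, illustrative layer sizes).
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    """Maps an RGB-D image to per-pixel grasp quality and angle maps."""
    def __init__(self):
        super().__init__()
        # Encoder: strided convolutions extract features and shrink the map.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=9, stride=3, padding=3), nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: transposed convolutions restore the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 8, kernel_size=3, stride=2,
                               padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 16, kernel_size=5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 32, kernel_size=9, stride=3,
                               padding=3), nn.ReLU(),
        )
        # Per-pixel heads: grasp quality and angle encoded as cos/sin of 2*theta.
        self.quality = nn.Conv2d(32, 1, kernel_size=1)
        self.cos2 = nn.Conv2d(32, 1, kernel_size=1)
        self.sin2 = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):
        f = self.decoder(self.encoder(x))
        return self.quality(f), self.cos2(f), self.sin2(f)

if __name__ == "__main__":
    net = GraspNet()
    rgbd = torch.randn(1, 4, 300, 300)   # batch of one RGB-D image
    q, c2, s2 = net(rgbd)
    print(q.shape)                        # torch.Size([1, 1, 300, 300])
```

At inference, the pixel with the highest quality score gives the grasp centre, and the angle there is recovered as θ = 0.5 * atan2(sin 2θ, cos 2θ), matching the paper's rule of choosing the pose with the largest grasp quality.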
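
The background-plane estimation can be sketched as a standard least-squares fit of z = ax + by + c to depth points known to lie on the table; the function names and this explicit-plane parameterization are assumptions, since the abstract does not give the exact formulation. Subtracting the fitted plane re-expresses every depth value as a height above the workspace, so the background sits at zero.

```python
# Hypothetical sketch of least-squares background-plane estimation (NumPy).
import numpy as np

def fit_background_plane(points):
    """Fit z = a*x + b*y + c to background points of shape (N, 3)."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def height_above_plane(points, coeffs):
    """Signed height of each point above the fitted plane; background -> ~0."""
    a, b, c = coeffs
    return points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)

# Example: fit a synthetic tilted table and check the residual heights.
xs, ys = np.meshgrid(np.arange(5), np.arange(5))
zs = 0.1 * xs + 0.05 * ys + 2.0
pts = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
coeffs = fit_background_plane(pts)
print(np.allclose(height_above_plane(pts, coeffs), 0.0))  # True
```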

