Abstract

Robotic grasping is an important task in many industrial applications. However, combining detection and grasping into a dynamic, efficient object-moving pipeline remains a challenge, and training and testing robotic algorithms in the real world is time consuming. Here we present a framework for dynamic robotic grasping based on a deep Q-network (DQN) operating in a virtual grasping space. The proposed framework consists of the DQN, a convolutional neural network (CNN), and a virtual model of the robotic grasping setup. After observing the grasp proposals generated by the generative grasping convolutional neural network (GG-CNN), the robotic manipulator selects actions according to the Q-network. Different actions yield different rewards, which are used to update the network through the loss function. The goal of this method is to find a policy that maximizes the total reward and thereby accomplishes a dynamic grasping process. In tests in the virtual space, we achieve an 85.5% grasp success rate on a set of previously unseen objects, which demonstrates the accuracy of the DQN-enhanced GG-CNN model. The experimental results show that the DQN can efficiently enhance the GG-CNN by accounting for the grasping procedure (i.e., the grasping time and the gripper's posture), which stabilizes the grasping procedure and increases the success rate of robotic grasping.
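
The abstract gives no code, but the reward-driven update it describes is the standard DQN temporal-difference step. The sketch below illustrates that step in PyTorch under stated assumptions: the small CNN architecture, the discrete action set, the discount factor, the smooth-L1 loss, and the synthetic batch are all illustrative choices, not the authors' implementation, and the GG-CNN grasp-proposal stage is not modeled here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """Small CNN mapping a depth image of the scene to Q-values,
    one per discrete gripper action (illustrative architecture only)."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_actions)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def dqn_update(policy_net, target_net, optimizer, batch, gamma=0.99):
    """One temporal-difference update: the reward earned by an action
    pulls the predicted Q-value toward a bootstrapped target."""
    states, actions, rewards, next_states, dones = batch
    q_pred = policy_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_states).max(1).values
        q_target = rewards + gamma * q_next * (1.0 - dones)
    loss = F.smooth_l1_loss(q_pred, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    n_actions = 6  # hypothetical discretization of gripper pose/timing adjustments
    policy, target = QNetwork(n_actions), QNetwork(n_actions)
    target.load_state_dict(policy.state_dict())
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
    batch = (
        torch.randn(8, 1, 64, 64),          # depth images (states)
        torch.randint(0, n_actions, (8,)),  # actions taken
        torch.rand(8),                      # rewards (e.g., grasp success signal)
        torch.randn(8, 1, 64, 64),          # next states
        torch.zeros(8),                     # episode-done flags
    )
    print("loss:", dqn_update(policy, target, opt, batch))
```

In a setup like the paper's, the reward would reflect grasp outcome and procedure quality (e.g., grasping time and gripper posture), so maximizing the expected return encourages both successful and stable grasps.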
