Abstract

Robotic grasping in unstructured, dense clutter remains a challenging task and has long been a key research direction in robotics. In this paper, we propose a novel robotic grasping system that exploits the synergies between pushing and grasping actions to automatically grasp objects in dense clutter. Our method uses fully convolutional action-value functions (FCAVF) to map visual observations to two action-value tables in a Q-learning framework. These two tables infer the utility of pushing and grasping actions; the highest value, together with its corresponding location and orientation, indicates where the end effector should execute an action. To enable better grasping, we introduce an active pushing mechanism based on a new metric, called Dispersion Degree, which describes how spread out the objects in the environment are. We then design a coordination mechanism that applies the synergies of the two actions based on their action-values and the objects' dispersion degree, making grasps more effective. Experimental results show that our proposed robotic grasping system greatly improves the grasping success rate in dense clutter and also generalizes to new scenarios.
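The abstract does not give the exact definitions of the Dispersion Degree metric or the coordination mechanism, but the overall decision flow it describes can be illustrated with a minimal sketch. Here Dispersion Degree is approximated as the mean pairwise distance between object centroids, and the coordination rule (push when the scene is too clustered, otherwise take the globally best-scoring action) uses a hypothetical threshold; the array shapes, function names, and threshold value are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def dispersion_degree(centroids):
    """Mean pairwise Euclidean distance between object centroids.

    A hypothetical proxy for the paper's Dispersion Degree metric:
    small values mean the objects are tightly clustered.
    """
    centroids = np.asarray(centroids, dtype=float)
    n = len(centroids)
    if n < 2:
        return float("inf")  # a lone object is trivially dispersed
    diffs = centroids[:, None, :] - centroids[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Average over the n*(n-1) ordered pairs, excluding self-distances.
    return dists.sum() / (n * (n - 1))

def select_action(q_grasp, q_push, centroids, min_dispersion=30.0):
    """Pick an action from two pixel-wise action-value tables.

    q_grasp, q_push: arrays of shape (num_rotations, H, W); each entry
    scores executing that action at one image location and orientation.
    If the scene is too clustered (dispersion below min_dispersion),
    an active push is forced to spread the objects out; otherwise the
    globally highest-valued action is executed.
    """
    if dispersion_degree(centroids) < min_dispersion:
        table, name = q_push, "push"
    elif q_grasp.max() >= q_push.max():
        table, name = q_grasp, "grasp"
    else:
        table, name = q_push, "push"
    rot, y, x = np.unravel_index(np.argmax(table), table.shape)
    return name, (rot, y, x)
```

In this sketch the returned `(rot, y, x)` index plays the role of the best orientation and location for the end effector described in the abstract.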
