Abstract

For robots operating in unstructured work environments, grasping unknown objects for which neither model data nor RGB data are available is essential. Autonomous robotic grasping depends not only on identifying the object category but also on capturing the object's shape. We present a new grasping approach based on the basic geometric components of objects. Simplifying complex objects facilitates the description of object shape and provides an effective basis for selecting grasping strategies. First, a depth camera acquires partial 3D data of the target object. The 3D data are then segmented, and each segmented part is simplified to a cylinder, a sphere, an ellipsoid, or a parallelepiped according to its geometric and semantic shape characteristics. The grasp pose is constrained by the simplified shape features, and the core part of the object is used for grasp training with deep learning. The grasping model was evaluated in both simulation and robot experiments; the results show that the grasp scores learned with the simplified-shape constraints are more robust to gripper pose uncertainty than those learned without them.
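
The pipeline summarized above (segment the partial point cloud, simplify each part to a geometric primitive, and constrain the grasp pose by that primitive) can be illustrated with a minimal sketch. The PCA-based classification rule, its thresholds, and the function names below are illustrative assumptions for exposition only, not the paper's actual fitting or constraint procedure.

```python
import numpy as np

# Hypothetical heuristic: classify a segmented point-cloud part into one of the
# primitive types named in the abstract (sphere, cylinder, ellipsoid,
# parallelepiped) from the spread of its PCA eigenvalues, then derive a
# constrained grasp approach direction. Thresholds are illustrative only.

def classify_primitive(points: np.ndarray) -> tuple[str, np.ndarray]:
    """points: (N, 3) array of one segmented part. Returns (label, main_axis)."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    l1, l2, l3 = eigvals[::-1]               # l1 >= l2 >= l3
    main_axis = eigvecs[:, -1]               # direction of largest variance
    if l3 / l1 > 0.8:                        # nearly isotropic spread
        label = "sphere"
    elif l2 / l1 > 0.8 and l3 / l1 < 0.4:    # two long axes, one short
        label = "parallelepiped"
    elif l2 / l1 < 0.4:                      # one dominant axis
        label = "cylinder"
    else:
        label = "ellipsoid"
    return label, main_axis

def constrain_grasp_approach(label: str, main_axis: np.ndarray) -> np.ndarray:
    """Pick an approach direction consistent with the simplified shape:
    perpendicular to the main axis for elongated shapes, top-down otherwise."""
    if label in ("cylinder", "ellipsoid"):
        helper = np.array([1.0, 0.0, 0.0])
        if abs(np.dot(helper, main_axis)) > 0.9:
            helper = np.array([0.0, 1.0, 0.0])
        approach = np.cross(main_axis, helper)
        return approach / np.linalg.norm(approach)
    return np.array([0.0, 0.0, -1.0])

if __name__ == "__main__":
    # synthetic elongated part (cylinder-like point set)
    rng = np.random.default_rng(0)
    part = rng.normal(scale=[0.01, 0.01, 0.10], size=(500, 3))
    label, axis = classify_primitive(part)
    print(label, constrain_grasp_approach(label, axis))
```

In practice such a primitive label and constrained approach direction would be used only to restrict the candidate grasp poses fed to the learned grasp-scoring model, rather than to select the final grasp directly.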
