Abstract

Grasping and manipulating transparent objects with a robot is a long-standing challenge in robot vision. Successful robotic grasping requires 6D object pose estimation. However, transparent objects are difficult to recognize because their appearance varies with the background, and modern 3D sensors cannot collect reliable depth data on transparent surfaces because those surfaces are translucent, refractive, and specular. To address these challenges, we propose a 6D pose estimation method for transparent objects aimed at manipulation. Given a single RGB image of transparent objects, 2D keypoints are estimated using a deep neural network. The PnP algorithm then takes the camera intrinsics, the object model size, and the keypoints as inputs to estimate the 6D pose of the object. Finally, the predicted poses of the transparent objects are used for grasp planning. Our experiments demonstrate that our picking system can grasp transparent objects against different backgrounds. To the best of our knowledge, this is the first time a robot has grasped transparent objects from a single RGB image. Furthermore, the experiments show that our method outperforms 6D pose estimation baselines and generalizes to real-world images.
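The PnP step described above recovers the pose (R, t) that maps known 3D model keypoints onto their detected 2D image locations under the pinhole camera model. The sketch below shows only the forward model that PnP inverts, s·[u, v, 1]ᵀ = K(RX + t); the intrinsics, cube-shaped object model, and pose values are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical camera intrinsics (focal length 600 px, 640x480 image).
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Illustrative 3D keypoints: the 8 corners of a 10 cm cube, in the
# object's own coordinate frame (metres).
half = 0.05
object_points = np.array([[x, y, z]
                          for x in (-half, half)
                          for y in (-half, half)
                          for z in (-half, half)])

# An assumed ground-truth pose: 10-degree rotation about the camera
# z-axis, object centre 0.5 m in front of the camera.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [          0.0,            0.0, 1.0]])
t = np.array([0.0, 0.0, 0.5])

def project(points_3d, K, R, t):
    """Pinhole projection: s * [u, v, 1]^T = K (R X + t)."""
    cam = points_3d @ R.T + t      # transform into the camera frame
    uv = cam @ K.T                 # apply the intrinsics
    return uv[:, :2] / uv[:, 2:3]  # perspective divide

image_points = project(object_points, K, R, t)
print(image_points.shape)  # one (u, v) pixel per 3D keypoint
```

In the pipeline described in the abstract, `image_points` would instead come from the keypoint network, and a PnP solver (e.g. OpenCV's `cv2.solvePnP`) would search for the (R, t) that minimizes the reprojection error under exactly this model.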
