Abstract

This paper presents a novel approach to grasp detection for unknown items based on deep semantic segmentation. The approach detects whether an object is occluded or graspable and enables the determination of grasp vectors for several gripper types. The segmentation is tested with Fully Convolutional Networks (FCN) [1] and DeepLab V3+ [2]. The neural networks use RGB images, depth images, or both as input. The size, speed, and performance of the networks with these different inputs are discussed. A method for determining grasp vectors for parallel grippers is explained and evaluated in experiments. In contrast to current methods, which calculate only a single grasp vector, our approach can determine all possible grasp vectors.

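The abstract does not give the exact architectures or training setup, so the following is only a minimal sketch of the segmentation step: it uses a torchvision FCN as a stand-in for the FCN [1] / DeepLab V3+ [2] networks, three hypothetical classes (background, graspable, occluded), and an RGB-only input. Handling depth images or combined RGB-D input would require adapting the first network layer, which is not shown here.

    # Minimal sketch (not the authors' implementation): segment an RGB image into
    # background / graspable / occluded classes with a torchvision FCN.
    import torch
    from torchvision.models.segmentation import fcn_resnet50

    NUM_CLASSES = 3  # hypothetical labels: 0 = background, 1 = graspable, 2 = occluded

    # Untrained stand-in network, no downloaded weights (torchvision >= 0.13 API).
    model = fcn_resnet50(weights=None, weights_backbone=None, num_classes=NUM_CLASSES)
    model.eval()

    rgb = torch.rand(1, 3, 480, 640)      # placeholder RGB image (B, C, H, W)
    with torch.no_grad():
        logits = model(rgb)["out"]        # (B, NUM_CLASSES, H, W) per-pixel scores
    mask = logits.argmax(dim=1)           # per-pixel class prediction
    graspable = (mask == 1)               # candidate region for grasp vector computation

The per-pixel "graspable" region produced this way is the kind of intermediate result from which grasp vectors for a parallel gripper could then be derived; the paper's actual procedure for that step is described in the full text.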