Abstract

This article introduces a visual–tactile multimodal grasp data set, aiming to further research on robotic manipulation. The data set was built with a novel dexterous robot hand, Intel’s Eagle Shoal robot hand (Intel Labs China, Beijing, China). It contains 2550 grasp sets, each including tactile data, joint data, time labels, images, and RGB and depth video. By integrating visual and tactile data, researchers can better understand the grasping process and analyze deeper grasping issues. This article describes the building process of the data set as well as its composition. To evaluate the quality of the data set, the tactile data were analyzed by short-time Fourier transform (STFT). Slip detection based on the tactile data was realized with a long short-term memory (LSTM) network and contrasted with the visual data. The experiments compared the LSTM with traditional classifiers and evaluated its generalization ability across different grasp directions and different objects. The effective slip detection and generalization ability of the LSTM demonstrate the data set’s value in promoting research on robotic manipulation. Further work will be devoted to the visual and tactile data in the future.
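The abstract does not spell out the data set’s file layout, but the modalities listed above suggest a per-grasp record along the lines of the following Python sketch. Every field name, shape, and type here is an assumption made for illustration, not the data set’s actual schema.

    # Hypothetical sketch of one of the 2550 grasp sets once loaded into
    # memory; field names and shapes are assumptions based only on the
    # modalities named in the abstract, not the real schema.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class GraspSet:
        tactile: np.ndarray     # tactile readings, e.g. (time, sensor) array
        joints: np.ndarray      # hand joint readings, e.g. (time, joint) array
        timestamps: np.ndarray  # time labels aligning the streams, (time,)
        image: np.ndarray       # still image, (height, width, 3)
        rgb_video: str          # path to the RGB video file
        depth_video: str        # path to the depth video file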
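As a rough illustration of the STFT analysis mentioned above, the sketch below computes a spectrogram of a single tactile channel with SciPy. The sampling rate, window parameters, frequency band, and synthetic signal are assumptions for illustration, not the authors’ settings.

    # Minimal STFT sketch over one tactile channel. The sampling rate and
    # window length are assumed values; the random signal stands in for a
    # real tactile recording from the data set.
    import numpy as np
    from scipy.signal import stft

    fs = 1000  # assumed tactile sampling rate in Hz
    rng = np.random.default_rng(0)
    tactile = rng.standard_normal(2000)  # placeholder tactile channel

    # Short Hann windows make transient slip events visible as brief
    # broadband bursts in the time-frequency map.
    f, t, Zxx = stft(tactile, fs=fs, window="hann", nperseg=128, noverlap=96)
    magnitude = np.abs(Zxx)  # time-frequency energy, shape (freqs, frames)

    # Energy above an assumed 100 Hz band, per frame; a rising trace here
    # is one simple cue that slip may be occurring.
    high_band = magnitude[f > 100].sum(axis=0)
    print(high_band.shape)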
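In the same spirit, a minimal PyTorch sketch of an LSTM slip classifier of the kind evaluated here might look as follows. The input width (number of tactile channels), hidden size, and two-class slip/no-slip formulation are assumptions for illustration, not the authors’ architecture.

    # Minimal LSTM slip classifier sketch (assumed architecture): a tactile
    # sequence of shape (batch, time, channels) in, slip/no-slip logits out.
    import torch
    import torch.nn as nn

    class SlipLSTM(nn.Module):
        def __init__(self, n_channels=16, hidden=64, n_classes=2):
            super().__init__()
            self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                                batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):
            out, _ = self.lstm(x)         # out: (batch, time, hidden)
            return self.head(out[:, -1])  # classify from the last time step

    # Toy usage: 8 sequences, 200 time steps, 16 tactile channels.
    model = SlipLSTM()
    print(model(torch.randn(8, 200, 16)).shape)  # torch.Size([8, 2])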

Highlights

  • In recent years, dexterous robotic grasping has increasingly attracted worldwide attention.

  • Levine et al. used 14 robots to randomly grasp over 800,000 times and collected the grasping data to train a convolutional neural network (CNN) that teaches robots to grasp.[2]

  • The results show that the success rate is significantly influenced by the number of samples.

Introduction

Dexterous robotic grasping increasingly attracts worldwide attention. Levine et al. used 14 robots to randomly grasp over 800,000 times and collected the grasping data to train a convolutional neural network (CNN) that teaches robots to grasp.[2] … planning.[3] Mahler et al. built a data set including millions of point cloud data to train a grasp quality CNN (GQ-CNN) with an analytic metric and used the GQ-CNN to select the best grasp plan, which achieves a 93% success rate on eight types of known objects.[4,5,6] Zhang et al. trained robots to manipulate objects with demonstration videos that are input through virtual reality. They have explored the sample complexity of learning a specific manipulation task in their system: for grasping and placing tasks, the success rate increases from 20% to 80% when the number of samples increases from 11 to 109.[7] Sufficient high-quality data are the key to unlocking the door of dexterous robotic manipulation.
