Abstract

Slip detection plays a vital role in robotic dexterous grasping and manipulation, and it has long been a challenging problem in the robotics community. Unlike traditional tactile-perception-based methods, we propose a Generalized Visual-Tactile Transformer (GVT-Transformer) network that detects slip from visual and tactile spatiotemporal sequences. The main novelty of GVT-Transformer is its ability to handle unaligned visual and tactile data in heterogeneous formats captured by different tactile sensors. We train and test the proposed network on both a public visual-tactile grasping dataset and our own. The experimental results show that our method is better suited to the slip detection task than previous visual-tactile learning methods, and is more versatile.
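To make the fusion idea concrete, below is a minimal sketch of one way a transformer can consume unaligned visual and tactile sequences of different lengths and feature sizes. All module names, dimensions, and the fusion strategy (joint self-attention over concatenated tokens) are illustrative assumptions, not the authors' GVT-Transformer architecture.

```python
# A minimal sketch (not the authors' released code) of a cross-modal
# transformer for slip detection, assuming visual frames and tactile
# readings arrive as unaligned sequences with different lengths and
# feature sizes. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn

class VisualTactileSlipDetector(nn.Module):
    def __init__(self, vis_dim=512, tac_dim=64, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        # Project each modality's per-timestep features into a shared space.
        self.vis_proj = nn.Linear(vis_dim, d_model)
        self.tac_proj = nn.Linear(tac_dim, d_model)
        # Learned modality embeddings let the encoder tell the two
        # streams apart after they are concatenated along time.
        self.vis_type = nn.Parameter(torch.zeros(1, 1, d_model))
        self.tac_type = nn.Parameter(torch.zeros(1, 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.cls_head = nn.Linear(d_model, 2)  # slip vs. no-slip

    def forward(self, vis_seq, tac_seq):
        # vis_seq: (B, Tv, vis_dim); tac_seq: (B, Tt, tac_dim).
        # Tv and Tt may differ: the sequences need not be time-aligned.
        v = self.vis_proj(vis_seq) + self.vis_type
        t = self.tac_proj(tac_seq) + self.tac_type
        # Joint self-attention over the concatenated token sequence
        # lets visual tokens attend to tactile tokens and vice versa.
        tokens = torch.cat([v, t], dim=1)
        encoded = self.encoder(tokens)
        # Mean-pool over all tokens, then classify.
        return self.cls_head(encoded.mean(dim=1))

# Usage with dummy data: 10 visual frames, 25 tactile readings.
model = VisualTactileSlipDetector()
logits = model(torch.randn(2, 10, 512), torch.randn(2, 25, 64))
print(logits.shape)  # torch.Size([2, 2])
```

Because attention operates over a set of tokens rather than a fixed grid, neither temporal alignment nor equal sequence lengths between the two modalities is required, which is the property the abstract highlights for handling heterogeneous tactile sensors.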
