Abstract

Large-scale robotic skin with tactile sensing ability is emerging, with the potential for use in close-contact human–robot systems. Although recent developments in vision-based tactile sensing and related learning methods are promising, they have mostly been designed for small-scale use, such as on fingers and hands, in manipulation tasks. Moreover, learning perception for such tactile devices demands a large tactile dataset, which complicates the data collection process. To address these issues, this study introduces a multiphysics simulation pipeline, called SimTacLS, which considers not only the mechanical properties of external physical contact but also the realistic rendering of tactile images in a simulation environment. The system uses the resulting simulation dataset, including virtual images and skin deformation, to train a tactile deep neural network that extracts high-level tactile information. In addition, we adopt a generative network to minimize sim2real inaccuracy, preserving the simulation-based tactile sensing performance. Finally, we showcase this sim2real sensing method for our large-scale tactile sensor (TacLink) by demonstrating its use in two trial cases, namely, whole-arm nonprehensile manipulation and intuitive motion guidance, using a custom-built tactile robot arm integrated with TacLink. This article opens new possibilities for learning transferable tactile-driven robotic tasks from virtual worlds to real-world scenarios without compromising accuracy.

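To make the described pipeline concrete, the sketch below illustrates one plausible reading of the sim2real inference chain: a generative network translates a real tactile camera image into the simulation domain, and a network trained purely on simulated images then regresses high-level tactile information. This is a minimal, hypothetical PyTorch-style sketch; the class names (Real2SimGenerator, TactileNet), architectures, translation direction, and the 6-D position/force output are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a SimTacLS-style inference chain (not the paper's code):
# real tactile image -> generative real-to-sim translation -> sim-trained tactile network.
import torch
import torch.nn as nn


class Real2SimGenerator(nn.Module):
    """Illustrative image-to-image network mapping real tactile images to sim-like images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class TactileNet(nn.Module):
    """Illustrative network regressing contact information from a (sim-like) tactile image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 6)  # assumed output: 3-D contact position + 3-D contact force

    def forward(self, x):
        return self.head(self.encoder(x))


if __name__ == "__main__":
    generator = Real2SimGenerator().eval()   # would be trained to close the sim2real appearance gap
    tactile_net = TactileNet().eval()        # would be trained on simulated tactile images only

    real_image = torch.rand(1, 3, 128, 128)  # stand-in for a real tactile camera frame
    with torch.no_grad():
        sim_like = generator(real_image)     # translate real image into the simulation domain
        prediction = tactile_net(sim_like)   # estimate contact position and force
    print(prediction.shape)  # torch.Size([1, 6])
```

In this reading, the tactile network never needs retraining on real data; only the lightweight translation network has to bridge the visual gap between rendered and captured tactile images.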