Abstract
Tactile feedback is a key sensory channel that contributes to our ability to perform precise manipulations. In this regard, sensor skin provides robots with a sense of touch, making them increasingly capable of dexterous object manipulation. However, in applications like teleoperation, the complex sensory input arising from a virtually unlimited variety of textures must be projected onto the human user's skin in a meaningful manner. To address this issue, a deep gated recurrent unit-based autoencoder (GRU-AE) that captures the perceptual dimensions of tactile textures in a latent space was deployed to implicitly represent unseen textures. Expressing unknown textures in this latent space allowed the definition of a control law to effectively drive tactile displays and convey tactile feedback in a psychophysically meaningful manner. The approach was experimentally verified by evaluating the prediction performance of the GRU-AE on seen and unseen data gathered during active tactile exploration of objects commonly encountered in daily living. A user study on a custom-made tactile display was conducted in which real tactile perceptions during active tactile object exploration were compared to the emulated tactile feedback produced by the proposed tactile feedback loop. The results suggest that the deep GRU-AE for tactile display control offers an effective and intuitive method for efficient end-to-end tactile feedback during active tactile texture exploration.
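To make the architecture described above concrete, the following is a minimal sketch of what a GRU-based autoencoder for tactile time series might look like in PyTorch. It is not the authors' implementation; all class names, layer sizes, and the zero-input decoding scheme are illustrative assumptions. The encoder compresses a tactile sequence into a low-dimensional latent vector (the "perceptual" space the abstract refers to), from which a decoder reconstructs the sequence.

```python
import torch
import torch.nn as nn


class GRUAutoencoder(nn.Module):
    """Hypothetical GRU-AE: encodes a tactile time series into a
    low-dimensional latent vector and reconstructs the sequence.
    All dimensions are illustrative, not taken from the paper."""

    def __init__(self, n_features: int = 16, latent_dim: int = 2,
                 hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden_dim, batch_first=True)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)   # latent "perceptual" space
        self.from_latent = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.GRU(n_features, hidden_dim, batch_first=True)
        self.to_output = nn.Linear(hidden_dim, n_features)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features); h: (1, batch, hidden_dim)
        _, h = self.encoder(x)
        return self.to_latent(h[-1])                          # z: (batch, latent_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encode(x)
        h0 = self.from_latent(z).unsqueeze(0)                 # latent -> initial decoder state
        # Simple zero-input decoding: the latent code alone drives reconstruction.
        out, _ = self.decoder(torch.zeros_like(x), h0)
        return self.to_output(out)


# Training would minimize reconstruction error on tactile sequences, e.g.:
# model = GRUAutoencoder()
# loss = nn.MSELoss()(model(batch), batch)
```

Under this reading, a latent code computed for an unseen texture could then be mapped by a control law to actuation commands for a tactile display; the specific mapping used in the paper is not reproduced here.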