Abstract

The sense of touch is often crucial for humans performing manipulation tasks, so providing tactile feedback during teleoperation or to users of prosthetic devices would be beneficial. However, representing tactile information poses a major technical challenge: the numerous, possibly multimodal sensor readings are massive compared to the capabilities of available tactile display technology. We introduce an algorithm that deploys two stages of K-means clustering, along and across the tactile image frames that render the tactile sensor information at each time instant. In this manner, the massive tactile information is adaptively compressed in real time while preserving its physical meaning, and thus remains intuitive and direct. We experimentally verify and examine the characteristics of our algorithm by evaluating the original and compressed tactile data. The data were gathered during active tactile exploration of several objects of daily living by an Allegro robot hand covered with 15 uSkin sensor modules, providing 240 three-axis force vector measurements at each time instant. Our novel algorithm is straightforward enough to be implemented in existing tactile feedback systems. Finally, it allows for the direct feedback of massive tactile sensor data for a broad variety of tactile sensors and tactile displays, thereby enabling the compressed yet intuitive representation of massive tactile sensor information in real-time applications.
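As a rough illustration of the two-stage clustering idea described above, the sketch below clusters the force vectors within each tactile frame (spatial stage) and then clusters the resulting frames over time (temporal stage). The abstract does not specify an implementation, so the frame shape (240 three-axis vectors per time step), the cluster counts, and the use of scikit-learn's KMeans are all assumptions for illustration only.

import numpy as np
from sklearn.cluster import KMeans

def compress_frame(frame, k_spatial=8):
    """Stage 1 (assumed): cluster the 240 taxel force vectors within one
    frame and replace each vector by its cluster centroid, so the compressed
    frame still consists of physically meaningful force vectors."""
    km = KMeans(n_clusters=k_spatial, n_init=10).fit(frame)
    return km.cluster_centers_[km.labels_]  # (240, 3): centroid per taxel

def compress_sequence(frames, k_spatial=8, k_temporal=16):
    """Stage 2 (assumed): cluster the spatially compressed frames across
    time and map every time step to its temporal centroid frame."""
    compressed = np.stack([compress_frame(f, k_spatial) for f in frames])
    flat = compressed.reshape(len(frames), -1)  # (T, 240 * 3)
    km = KMeans(n_clusters=k_temporal, n_init=10).fit(flat)
    return km.cluster_centers_[km.labels_].reshape(len(frames), -1, 3)

# Random data stands in for a recorded uSkin sequence of 100 frames.
frames = np.random.rand(100, 240, 3)
out = compress_sequence(frames)
print(out.shape)  # (100, 240, 3), with far fewer distinct force vectors

One design point this sketch makes concrete: because both stages output cluster centroids of real force vectors, the compressed stream stays in the sensor's own units, which is one plausible reading of the abstract's claim that compression "preserves its physical meaning".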
