Abstract

Data-driven approaches are often used to model and generate vibrotactile feedback and sounds for rigid stylus-based interaction. However, prior research has typically addressed these two modalities separately due to challenges related to synchronization and design complexity. To address this, we introduce a novel multimodal multitask deep learning framework. We developed a comprehensive end-to-end data-driven system that captures contact acceleration signals and sound data from various textured surfaces. The framework introduces novel encoder-decoder networks for modeling and rendering vibrotactile feedback through an actuator while routing sound to headphones. The proposed encoder-decoder networks combine stacked transformers with convolutional layers to capture both local variability and overall trends in the data. To the best of our knowledge, this is the first attempt to apply a transformer-based data-driven approach to modeling and rendering vibrotactile signals as well as sounds during tool-surface interactions. In numerical evaluations, the proposed framework achieves lower RMS error than state-of-the-art models for both vibrotactile signals and sound data. A subjective similarity evaluation also confirms the superiority of the proposed method over the state of the art.
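
The abstract describes encoder-decoder networks that stack transformers on top of convolutional layers in a multitask setup (one shared representation, separate vibrotactile and sound outputs). The following is a minimal sketch of one way such an architecture could be organized; it is not the authors' implementation, and all module names, layer sizes, and the dummy input shape are assumptions for illustration only.

```python
# Illustrative sketch only (not the paper's code): convolutional front-end for
# local signal variability, stacked Transformer layers for overall trends, and
# two task-specific decoders sharing one encoder (multitask rendering).
import torch
import torch.nn as nn

class ConvTransformerEncoder(nn.Module):
    def __init__(self, in_channels=1, d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        # 1-D convolutions extract local features from raw signal frames
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, d_model, kernel_size=5, padding=2),
            nn.GELU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, padding=2),
            nn.GELU(),
        )
        # stacked Transformer encoder layers capture longer-range structure
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x):
        # x: (batch, channels, time)
        h = self.conv(x)                 # (batch, d_model, time)
        h = h.transpose(1, 2)            # (batch, time, d_model) for attention
        return self.transformer(h)       # shared latent sequence

class SignalDecoder(nn.Module):
    def __init__(self, d_model=128, out_channels=1, n_heads=4, n_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        # project the latent sequence back to a waveform (vibration or audio)
        self.head = nn.Conv1d(d_model, out_channels, kernel_size=1)

    def forward(self, z):
        h = self.transformer(z)                   # (batch, time, d_model)
        return self.head(h.transpose(1, 2))       # (batch, out_channels, time)

# Multitask usage: one shared encoder, two decoders for the two modalities.
encoder = ConvTransformerEncoder()
vib_decoder = SignalDecoder(out_channels=1)   # vibrotactile acceleration signal
snd_decoder = SignalDecoder(out_channels=1)   # sound waveform
x = torch.randn(2, 1, 1024)                   # dummy batch of input frames
z = encoder(x)
vibration, sound = vib_decoder(z), snd_decoder(z)
```

In this sketch the two decoders share the encoder's latent sequence, which is one plausible way to keep the rendered vibration and sound synchronized while training both outputs jointly.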
