Abstract

Data-driven approaches are commonly used to model and render haptic textures for rigid stylus-based interaction. Current state-of-the-art data-driven methods synthesize acceleration signals by interpolating samples recorded under different input parameters, using neural networks or parametric spectral estimation. In this paper, we explore the potential of emerging deep learning methods in this area. To this end, we design a complete end-to-end data-driven framework that synthesizes acceleration profiles with a deep spatio-temporal network. The network is trained on contact acceleration data collected with our manual scanning stylus, together with the corresponding interaction parameters, i.e., scanning velocity, direction, and force. It combines attention-aware 1D CNNs with an attention-aware encoder-decoder network to capture both the local spatial features and the temporal dynamics of the acceleration signals; the attention mechanisms assign weights to features according to their contributions. For rendering, the trained network generates synthesized signals in real time according to the user's input parameters. The framework was numerically compared with existing state-of-the-art approaches, demonstrating its effectiveness, and a pilot user study was conducted to assess subjective similarity.
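To make the "attention-aware 1D CNN" idea concrete, the following is a minimal, dependency-free sketch of the general pattern the abstract describes: convolving an acceleration trace with 1D filters, then pooling the resulting feature maps with softmax attention weights. All names (`conv1d`, `attention_pool`), the energy-based attention scores, and the toy data are illustrative assumptions, not the paper's actual architecture.

```python
import math

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation) over a signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def attention_pool(feature_maps):
    """Combine feature maps using attention weights derived from each
    map's energy (a stand-in for a learned attention scorer)."""
    scores = softmax([sum(f * f for f in fm) for fm in feature_maps])
    length = len(feature_maps[0])
    return [sum(w * fm[t] for w, fm in zip(scores, feature_maps))
            for t in range(length)]

# Toy acceleration trace and two hypothetical (not learned) filter kernels
accel = [0.0, 0.2, 0.5, 0.1, -0.3, -0.1, 0.4, 0.0]
kernels = [[1.0, -1.0, 0.0], [0.25, 0.5, 0.25]]
feature_maps = [conv1d(accel, k) for k in kernels]
pooled = attention_pool(feature_maps)  # one attended feature per valid position
```

In the actual framework, both the convolution kernels and the attention scoring would be learned end-to-end from the collected acceleration data rather than fixed by hand.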
