Abstract

Touch-produced sounds in tool-surface interactions convey rich information about textured surface properties and provide direct feedback about how users interact with the surface. This article presents a statistical learning-based approach for modeling and rendering touch-produced sounds in real time. We apply a data-driven modeling method that recreates highly realistic sounds from audio signals recorded during unconstrained tool-surface interactions. The recorded sound is segmented, and each segment is labeled with the average scanning velocity over that segment. We model each segment with wavelet tree models using a moving-window approach. Each window is analyzed with the fast wavelet transform and then organized into a tree structure. During rendering, we use the user's current velocity to select tree models and synthesize new sounds by breadth-first search and the inverse wavelet transform. We conducted a user study to evaluate the realism of our virtual sounds and their effect on human perception of texture dimensions in the presence of simultaneous real haptic cues. The results showed that adding virtual sound to haptic cues conveyed the texture's roughness and hardness more completely than haptic cues alone, whereas the perception of slipperiness depended mainly on touch.
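
The sketch below is a simplified illustration, not the authors' implementation, of the pipeline the abstract describes: the recording is split into windows, each window is labeled with its average velocity and decomposed with a wavelet transform, and at render time the model closest to the user's current velocity is selected and resynthesized with the inverse transform. The window length, wavelet family, nearest-velocity lookup, and coefficient perturbation (a stand-in for the paper's tree-based resampling) are all assumptions made for illustration; PyWavelets supplies the transforms.

```python
import numpy as np
import pywt

WAVELET = "db4"   # assumed wavelet family
WINDOW = 2048     # assumed window length in samples


def build_models(audio, velocities):
    """Split audio into fixed windows and store wavelet coefficients
    keyed by each window's average velocity."""
    models = []
    n_windows = len(audio) // WINDOW
    for i in range(n_windows):
        segment = audio[i * WINDOW:(i + 1) * WINDOW]
        avg_vel = float(np.mean(velocities[i * WINDOW:(i + 1) * WINDOW]))
        coeffs = pywt.wavedec(segment, WAVELET)  # multi-level decomposition
        models.append((avg_vel, coeffs))
    return models


def synthesize_window(models, current_velocity, rng=np.random.default_rng()):
    """Select the model closest to the current velocity, lightly perturb its
    detail coefficients (a placeholder for tree-based resampling), and
    reconstruct a new window via the inverse wavelet transform."""
    _, coeffs = min(models, key=lambda m: abs(m[0] - current_velocity))
    new_coeffs = [coeffs[0]] + [
        c * (1.0 + 0.05 * rng.standard_normal(c.shape)) for c in coeffs[1:]
    ]
    return pywt.waverec(new_coeffs, WAVELET)


if __name__ == "__main__":
    fs = 44100
    recorded = np.random.randn(fs)               # placeholder recorded audio
    recorded_vel = np.abs(np.random.randn(fs))   # placeholder velocity trace
    models = build_models(recorded, recorded_vel)
    window = synthesize_window(models, current_velocity=0.8)
    print(window.shape)
```

In a real-time renderer, `synthesize_window` would be called once per output window with the instantaneous tool velocity, and consecutive windows would be overlap-added to avoid discontinuities; those details are omitted here for brevity.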
