In cinematic VR applications, haptic feedback can significantly enhance the sense of reality and immersion for users. The increasing availability of emerging haptic devices opens up possibilities for future cinematic VR applications that allow users to receive haptic feedback while they are watching videos. However, automatically rendering haptic cues from real-time video content, particularly from video motion, is a technically challenging task. In this article, we propose a novel framework called "Video2Haptics" that leverages the bio-inspired event camera to capture event signals as a lightweight representation of video motion. We then propose efficient event-based visual processing methods to estimate force or intensity from video motion in the event domain, rather than the pixel domain. To demonstrate the application of Video2Haptics, we convert the estimated force or intensity to dynamic vibrotactile feedback on haptic gloves, synchronized with the corresponding video motion. As a result, Video2Haptics allows users not only to view the video but also to perceive the video motion concurrently. Our experimental results show that the proposed event-based processing methods for force and intensity estimation are one to two orders of magnitude faster than conventional methods. Our user study results confirm that the proposed Video2Haptics framework can considerably enhance the users' video experience.
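The abstract does not detail the event-domain estimation itself, but the general idea can be illustrated with a minimal sketch: because an event camera emits a sparse stream of brightness-change events rather than full frames, the event rate itself serves as a lightweight proxy for motion intensity. The sketch below is a hypothetical illustration, not the paper's method; the event format (microsecond timestamps), the window size, and the linear amplitude mapping are all assumptions.

```python
# Hypothetical sketch (not the paper's actual algorithm): derive a motion
# "intensity" signal by counting event-camera events per fixed time window,
# then normalize it to a vibration amplitude for a vibrotactile actuator.
import numpy as np

def intensity_from_events(timestamps_us, window_us=10_000):
    """Count events per fixed time window as a motion-intensity proxy."""
    t = np.asarray(timestamps_us, dtype=np.int64)
    t = t - t.min()
    bins = np.arange(0, t.max() + window_us, window_us)
    counts, _ = np.histogram(t, bins=bins)
    return counts

def to_vibration_amplitude(counts, max_amplitude=1.0):
    """Normalize event counts to [0, max_amplitude] for the actuator drive."""
    peak = counts.max()
    if peak == 0:
        return np.zeros_like(counts, dtype=float)
    return max_amplitude * counts / peak

# Synthetic event timestamps: a dense burst (fast motion) mid-stream.
ts = np.concatenate([np.linspace(0, 50_000, 100),
                     np.linspace(50_000, 60_000, 400),
                     np.linspace(60_000, 100_000, 100)])
amp = to_vibration_amplitude(intensity_from_events(ts))
```

Operating on sparse event counts rather than dense pixel frames is what makes this style of processing cheap, consistent with the order-of-magnitude speedups the abstract reports.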