Abstract

This paper deals with one of the key problems in interacting with virtual environments (VEs): finger force feedback during manipulation of virtual objects. In the real world, haptic interactions with objects are controlled using kinesthetic/tactile information. In a VE, presenting such information requires a dextrous hand master that provides force feedback to individual fingers. The presence of friction and/or time delays and the lack of tactile information in such devices reduce operator performance. In those cases, other sensory channels such as vision and audition can be used to replace (sensory substitution) or to supplement (information redundancy) the haptic channel. In this paper, we present the results of an experimental study investigating human performance during interactions with virtual objects through different sensory channels. The study was performed using the Rutgers VR distributed system, which enables users to receive force information through the haptic, visual, or auditory channel. The experiment was performed using both partially immersive monoscopic and stereoscopic visual displays. Results indicate that redundant presentation of haptic information greatly increases performance and reduces error rates compared to the open-loop case (no force feedback). The best results were obtained when both direct haptic and redundant sound feedback were provided. The increase in task completion time when using visual force feedback suggests an overload of the visual channel.

