Abstract

Three-dimensional Virtual Environments (VEs) enable the acquisition of knowledge about a given domain through interaction with virtual entities. The flexibility of a VE makes it possible to represent only the part of the world that is considered relevant for the end user: a proper choice of the information, representation, and rendering included in the virtual world can greatly simplify the perception and interpretation effort required of users. Moreover, a VE can present data that would be difficult or impossible to appreciate in the real world in an easily perceivable form: domain experts can communicate specific views and interpretations of reality in a way accessible to end users. A properly designed virtual experience can therefore significantly improve and simplify several learning tasks. Organizing information in three dimensions and designing techniques to interact with it require a considerable effort: interaction metaphors have been introduced to facilitate access to and interaction with VEs (Bowman, 2001). A metaphor is the process of mapping a set of correspondences from a source domain to a target domain (Lakoff & Johnson, 1980). Metaphors help designers map features of interaction techniques onto concepts more immediately accessible to end users. Interaction can be made more immersive and engaging through multi-modality. A multimodal system coordinates the processing of multiple natural input modalities (such as speech, touch, hand gestures, eye gaze, and head and body movements) with multimedia system output (Oviatt, 1999). The interaction is carried out with advanced input/output devices involving different sensory channels (sight, hearing, touch, etc.) in an integrated way. Spatial input devices (such as trackers, 3D pointing devices, and gesture and vocal devices) and multisensory output technologies (head-mounted displays, spatial audio, and haptic devices) are increasingly used as common components of Virtual Reality applications.
Each device addresses a particular sense and exhibits a different interface: (Bowman et al., 2004) offers a broad review of multimodal interaction, while (Salisbury, 2004) is a good introduction to haptics. Multimodal interaction requires data to be redundant and polymorphous, so as to address different sensory modalities at the same time (Jacobson, 2002). The effectiveness of metaphors strongly depends on the sensory channels they address and on the users' characteristics. Therefore this presentation will correlate and compare hapto-acoustic
