Abstract

This special issue on "Multisensory Interaction in Virtual Environments" is promoted by the European Network of Excellence ENACTIVE (http://www.enactivenetwork.org). It follows the second International Conference on Enactive Interfaces, ENACTIVE 2005 (http://www.enactive2005.org), held in the beautiful historical setting of Casa Paganini, a new centre of excellence of the University of Genova (http://www.casapaganini.org), Italy, on November 18–19, 2005, and chaired by the guest editors of this issue. The six papers included here went through a peer-review process starting from the papers accepted and presented at ENACTIVE 2005. The conference was a great success in terms of participation (more than 120 registered participants) and attracted researchers from various disciplines, including computer science and engineering, cognitive sciences, psychology, human factors, and interaction design.

The focus of this issue is on multisensory and enactive human–computer interaction in virtual environments, with special emphasis on new paradigms of interaction. The broader aim is to contribute to the growth of a truly multidisciplinary research community around the new generation of human–computer interfaces called enactive interfaces. Multimodal human–computer interfaces integrate input and output modalities such as audio, speech, vision, gesture, and touch. Enactive interfaces represent an advance over the multimodal paradigm, building on a fundamental concept of interaction that is not yet fully exploited in most existing human–computer multimodal interfaces.

Enactive knowledge goes beyond multisensory-mediated knowledge: it is the kind of knowledge "learned by doing", based on the experience of perceptual responses to action, acquired by demonstration and sharpened by practice. According to Varela's model of "enactive cognition" (Varela et al. 1991), enactive knowledge is primarily "knowledge for action", and, conversely, action is always necessary to acquire knowledge (J. Stewart, Enactive knowledge, in Enactive Lexicon, http://www.enactivenetwork.org). This type of knowledge transmission can be considered the most direct, in the sense that it is natural and intuitive, since it is based on experience and on the perceptual responses to motor acts. Typical tasks requiring a high degree of enactive knowledge include dancing, playing a musical instrument, moulding shapes in clay or engraving, driving a car, or suturing a wound.

It follows that physical embodiment is a necessary condition for the acquisition of enactive knowledge. This consideration is particularly relevant for Virtual Reality, since the level of immersion and the degree of interaction established within the environment are determined by the extent to which sensory responses to action can be mimicked in the simulated scenario. Human–computer interfaces should be able to display to the human subject the possibilities for action available in the environment and to evoke them as natural (Gibson's affordances), that is, as actions relevant to the organism's goal-directed behavior. Evoking affordances in virtual environments is linked to the possibility of displaying appropriate, correlated multisensory stimuli to the subject: the immersiveness of the experience alone does not necessarily lead to a natural interaction if the user cannot perceive what can be done via the interface (Gross et al. 2005).
Although HCI technologies have so far not fully exploited the potential of enactive, multisensory-mediated knowledge, recent technological advances, for example in gesture tracking, sensory integration, and 3D frameworks, have made it possible for virtual environments to naturally display possibilities for action and for direct manipulation of data, thereby guaranteeing the capacity for continuing action and enaction in the environment.
