Abstract

This article contributes to our understanding of how participants use different resources to accomplish word explanations in social virtual reality (VR). The article draws on conversation analysis to examine audio-visual data of interaction on the Rec Room VR platform. A view of the physical space the participants inhabit has also been captured. The twelve participants have minimal experience with social VR, and English is used as a lingua franca. The focus is on participants’ use of environmentally coupled gestures (EnCGs) during a word explanation activity. In the activity, two or more participants play a word-guessing game in which one participant explains a word using drawings and gestures as well as speech. Findings show that EnCGs that feature elements in the environment are more readily interpretable than EnCGs that feature elements over the avatar body. The latter can create situations in which achieving the goal of the word explanation activity (a correct guess) is difficult. In addition, the explainer’s orientation to their physical body and the recipient’s orientation to the virtual body during the joint word explanation activity can create situations in which the gestures become difficult for the recipient to interpret. To conclude, the observations in this article reveal the importance of the alignment of virtual and physical gestures for the intelligibility of gesture in VR.
