In this special section, we are pleased to present extended versions of four outstanding papers that were originally presented at the IEEE Virtual Reality 2006 Conference (VR 2006). IEEE Virtual Reality is the premier international conference on all aspects of virtual, augmented, and mixed reality. The conference program at VR 2006 consisted of nine sessions on the following topics: perception, simulation and visualization, applications of VR, distributed and collaborative systems, evaluation and user studies, augmented reality, tracking and projection displays, 3D interaction, and haptic and olfactory displays. For this special section, the international program committee selected four excellent papers from the 28 accepted research papers. As always, the choice was difficult, since many of the other papers were also strong candidates.

The first paper, by Sean D. Young, Bernard D. Adelstein, and Stephen R. Ellis, received the Best Paper Award at VR 2006 for its high relevance to the fields of virtual reality and simulation. The authors asked the question, "Does taking a motion sickness questionnaire make you motion sick?" Surprisingly, their research indicates that the answer is "yes!" The paper demonstrates that administering the questionnaire itself makes participants aware that the virtual environment may produce motion sickness. The study shows that reports of motion sickness after immersion are much greater when both pre- and posttest questionnaires are given than when only a posttest questionnaire is used. Since pretest questionnaires cannot simply be dropped in most cases, the authors suggest a number of ways to reduce this effect and discuss the implications of their observations.

Augmented reality (AR) systems, which combine real-world and virtual imagery, present a unique set of perceptual issues for the user. The paper by J. Edward Swan II, Adam Jones, Eric Kolstad, Mark A. Livingston, and Harvey S. Smallman addresses one such problem: the accuracy of depth judgments made by users of optical see-through AR displays. These displays allow users to view the physical world directly while overlaying virtual objects on the real scene. In many applications, it is critical that the user perceive the virtual objects to be in the correct position relative to the real world, but differences in depth perception between the virtual and real imagery may prevent this. Moreover, measuring the accuracy of users' depth judgments is not trivial. The authors review previous work and methods used to address this problem and then present two experiments of their own. The experiments use a perceptual matching technique and a blind walking technique to measure depth judgments, and they reveal some interesting and surprising results.

An emerging area of research in the VR community focuses on virtual humans. In the past, virtual human research has mainly addressed technical issues: making virtual characters realistic in appearance, movement, emotion, and behavior. With many of these problems at least partially solved, however, researchers can now begin to evaluate the social aspects of virtual humans, that is, how real users interact with virtual characters. Andrew B. Raij, Kyle Johnsen, Robert F. Dickerson, Benjamin C. Lok, Marc S. Cohen, Margaret Duerson, Rebecca Rainer Pauly, Amy O. Stevens, Peggy Wagner, and D. Scott Lind present a paper along these lines, describing two studies in which medical students interacted with a simulated patient.
The simulated patient was either a real person acting the part of a patient or a virtual human playing this role. The studies show that while interpersonal interactions with the virtual human were similar in many ways to interactions with the real human, there were also subtle differences in the participants' nonverbal behavior and in their attitude toward the virtual human. Such studies are critical for improving our understanding of how to use virtual characters in real-world VR applications.

Believable haptic interaction with complex virtual objects remains a challenging research topic. Michael Ortega, Stephane Redon, and Sabine Coquillart have generalized the god-object method to enable high-quality haptic interaction with rigid bodies consisting of tens of thousands of triangles. They suggest separating the computation of the motion of the six-degree-of-freedom god-object from the computation of the force applied to the user. The constraint-based force felt by the user can be computed within a few microseconds, which is necessary for the tactile simulation of fine surface details. The force is computed using a novel constraint-based quasistatic approach, which allows the suppression of force artifacts typically found in previous methods. The update of the pose of the rigid god-object is performed within a few milliseconds, which allows visual display at appropriate frame rates.
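To make the two-rate structure described above concrete, the following C++ sketch shows one common way such a decoupling can be organized: a fast servo loop computes a coupling force from the most recently available god-object pose, while a slower loop resolves the constrained pose against the environment. The helper functions, the spring-like coupling, the loop rates, and all names are illustrative assumptions for this sketch only; they do not reproduce the authors' constraint-based quasistatic formulation.

```cpp
// Minimal sketch of a two-rate haptic loop: a fast loop evaluates a coupling
// force from the latest god-object (proxy) pose, while a slower loop updates
// that pose. All helpers are hypothetical stubs, not the authors' code.
#include <array>
#include <atomic>
#include <chrono>
#include <functional>
#include <mutex>
#include <thread>

using Vec6 = std::array<double, 6>;  // 6-DOF pose / wrench (3 translation + 3 rotation)

struct SharedState {
    std::mutex m;
    Vec6 godObjectPose{};  // constrained proxy pose, updated at millisecond scale
    Vec6 devicePose{};     // raw haptic device pose, written by the fast loop
};

// Hypothetical stand-ins for device I/O and the constraint/collision solver.
Vec6 readDevicePose() { return {}; }                                   // stub: query device
void sendDeviceForce(const Vec6&) {}                                   // stub: command device
Vec6 solveConstrainedPose(const Vec6& device, const Vec6&) { return device; }  // stub solver

// Spring-like coupling between device and proxy; a simple stand-in for a
// constraint-based force computation.
Vec6 couplingForce(const Vec6& device, const Vec6& proxy, double stiffness) {
    Vec6 f{};
    for (int i = 0; i < 6; ++i) f[i] = stiffness * (proxy[i] - device[i]);
    return f;
}

void fastForceLoop(SharedState& s, std::atomic<bool>& running) {
    while (running) {                      // conceptually runs at the device servo rate
        Vec6 device = readDevicePose();
        Vec6 proxy;
        {
            std::lock_guard<std::mutex> lock(s.m);
            s.devicePose = device;
            proxy = s.godObjectPose;       // use the most recent constrained pose
        }
        sendDeviceForce(couplingForce(device, proxy, /*stiffness=*/800.0));
    }
}

void godObjectLoop(SharedState& s, std::atomic<bool>& running) {
    while (running) {                      // slower, millisecond-scale update
        Vec6 device, previous;
        {
            std::lock_guard<std::mutex> lock(s.m);
            device = s.devicePose;
            previous = s.godObjectPose;
        }
        Vec6 updated = solveConstrainedPose(device, previous);  // collision + constraints
        {
            std::lock_guard<std::mutex> lock(s.m);
            s.godObjectPose = updated;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

int main() {
    SharedState state;
    std::atomic<bool> running{true};
    std::thread force(fastForceLoop, std::ref(state), std::ref(running));
    std::thread proxy(godObjectLoop, std::ref(state), std::ref(running));
    std::this_thread::sleep_for(std::chrono::seconds(1));  // run briefly for illustration
    running = false;
    force.join();
    proxy.join();
}
```

The key design point the sketch illustrates is the asynchrony: the force computation never waits on the (comparatively slow) constrained pose update, so the user feels a stable force even while the god-object is being resolved against a large triangle mesh.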