This issue contains five papers. In the first paper, Ege Tekgün, Muhtar Çağkan Uludağlı, Hüseyin Akcan, and Burak Erdeniz, from Izmir University of Economics in Turkey, assess the influence of virtual avatar anthropomorphism and the synchronicity of visuo-tactile stimulation on self-location using a virtual reality (VR) full-body illusion (FBI) experiment. During the experiment, half of the 36 participants observed a gender-matched full-body humanoid avatar from a first-person perspective (1PP), and the other half observed a less anthropomorphic full-body cubical avatar from 1PP, while receiving synchronous and asynchronous visuo-tactile stimulation. The results show significant main effects of both the synchronicity of the visuo-tactile stimulation and the avatar body type on self-location, but no significant interaction between them. Moreover, the results of the self-report questionnaire provide additional evidence that participants who received synchronous visuo-tactile stimulation experienced not only greater changes in the feeling of self-location but also increased ownership and referral of touch.

In the second paper, Ke Li, Qian Zhang, and Jinyuan Jia, from Tongji University in Shanghai, and Hantao Zhao, from Southeast University in Nanjing, all in China, discuss the technology of presenting building information modeling (BIM) on an online platform and the difficulty of displaying large-scale BIM scenes flawlessly on mobile browsers, given network bandwidth and browser performance limitations. The authors propose CEBOW, a Cloud-Edge-Browser Online architecture for visualizing BIM components, which combines transmission scheduling, cache management, and optimal initial loading into a single system architecture. BIM scenes are used for network transmission testing, and the results show that the method effectively reduces scene loading time and network delay while improving the visualization of large-scale scenes.

In the third paper, Yuzhu Dong and Eakta Jain, from the University of Florida in Gainesville, and Sophie Jörg, from Clemson University, all in the United States, discuss how the importance of eyes for virtual characters stems from the intrinsic social cues they convey. They emphasize that eye animation impacts the perception of an avatar's internal emotional state, and present three large-scale experiments that investigate the extent to which viewers can identify whether an avatar is scared. The authors find that participants can identify a scared avatar with 75% accuracy from cues in the eyes, including pupil size variation, gaze, and blinks. Because eye trackers return pupil diameter in addition to gaze, their experiments inform practitioners that animating the pupil correctly will add expressiveness to a virtual avatar at negligible additional cost. These findings also have implications for creating expressive eyes in intelligent conversational agents and social robots.
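Since trackers already report pupil diameter alongside gaze, driving an avatar's pupil from that signal is cheap, as the authors note. The following is a minimal generic sketch of such a mapping, not the authors' implementation; the diameter range, the smoothing constant, and the `avatar.set_pupil_scale`/`tracker.pupil_diameter_mm` hooks are all assumptions made for illustration.

```python
# Illustrative sketch only: maps a measured pupil diameter (mm) from an
# eye tracker to a normalized pupil scale for an avatar rig. The rig hook
# and the physiological range below are assumptions, not from the paper.

PUPIL_MIN_MM = 2.0   # assumed physiological minimum
PUPIL_MAX_MM = 8.0   # assumed physiological maximum

def pupil_scale(diameter_mm: float) -> float:
    """Normalize a measured pupil diameter to [0, 1]."""
    clamped = max(PUPIL_MIN_MM, min(PUPIL_MAX_MM, diameter_mm))
    return (clamped - PUPIL_MIN_MM) / (PUPIL_MAX_MM - PUPIL_MIN_MM)

class PupilAnimator:
    """Smooths noisy tracker samples with an exponential moving average."""

    def __init__(self, smoothing: float = 0.2):
        self.smoothing = smoothing   # 0 = frozen, 1 = no smoothing
        self._scale = 0.5            # start at a neutral pupil size

    def update(self, diameter_mm: float) -> float:
        target = pupil_scale(diameter_mm)
        self._scale += self.smoothing * (target - self._scale)
        return self._scale

# Example per-frame usage (tracker and avatar objects are hypothetical):
#   animator = PupilAnimator()
#   avatar.set_pupil_scale(animator.update(tracker.pupil_diameter_mm()))
```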
The fourth paper, by Osman Güler, from TUSAŞ Şehit Hakan Gülşen Vocational and Technical Anatolian High School in Ankara, and Serkan Savaş, from Çankırı Karatekin University, both in Turkey, presents a study showing that Interactive Boards (IBs) have the hardware necessary to run Stereoscopic 3D (S3D) training materials but lack an S3D imaging feature in the panel itself, so only the Anaglyph S3D imaging method can be applied to IBs. The authors therefore prepared an Anaglyph S3D training material on the skeletal system and investigated the effects of interactive 3D material design for IBs in education. A Likert-type scale was developed to measure the usability of the training material on IBs, and the material was evaluated by 20 experts. The data were analyzed with the SPSS statistical program and the results were interpreted. According to the results, the educational material was rated positively in terms of image characteristics, content, navigation, and ease of use; font sizes were moderately readable; and the feedback process and the help menu were moderately effective.
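Since anaglyph S3D is the one method the panel constraint leaves available, a brief illustration may help: a red-cyan anaglyph is composed entirely in software by taking the red channel from the left-eye view and the green and blue channels from the right-eye view, so it runs on any ordinary display. The sketch below shows this standard technique in generic form; it is not the authors' material pipeline.

```python
import numpy as np

def red_cyan_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Compose a red-cyan anaglyph from a stereo pair.

    Both inputs are H x W x 3 uint8 RGB images rendered (or photographed)
    from slightly offset viewpoints. The left eye's red channel is kept,
    and the right eye supplies green and blue, so red-cyan glasses route
    each view to the correct eye on a standard 2D display.
    """
    if left.shape != right.shape:
        raise ValueError("stereo pair must have identical shapes")
    out = right.copy()
    out[..., 0] = left[..., 0]  # red from the left view
    return out

# Example with a synthetic stereo pair (random noise stands in for renders):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)
    right = rng.integers(0, 256, (480, 640, 3), dtype=np.uint8)
    print(red_cyan_anaglyph(left, right).shape)  # (480, 640, 3)
```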
The last paper, by Alexandra Sierra and Marie Postma, from Tilburg University in the Netherlands, and Menno Van Zaanen, from North-West University in Potchefstroom, South Africa, investigates whether the uncanny valley effect, which has already been found for the human-like appearance of virtual characters, can also be found for animal-like appearances. The authors conducted an online study in which six different animal designs were evaluated on the following properties: familiarity, commonality, naturalness, attractiveness, interestingness, and animateness. The study participants differed in age (from under 10 to 60 years) and origin (Europe, Asia, North America, and South America). For the evaluation of the results, the authors ranked the animal-likeness of the characters using both expert opinion and participant judgments. They also investigated the effect of movement and morbidity. The results confirm the existence of the uncanny valley effect for virtual animals, especially with respect to familiarity and commonality, for both still and moving images. No uncanny valley effect was detected for interestingness and animateness.