Abstract

This issue contains six papers. In the first paper, Hui Liang, Jian Chang, and Shujie Deng, from the National Centre for Computer Animation in Bournemouth, UK; Can Chen, from Changzhou University, China; Ruo-feng Tong, from Zhejiang University, Hangzhou, China; and Jian J. Zhang, from Bournemouth University, UK, design an immersive storytelling environment that allows multiple players to manipulate virtual puppetry with natural hand gestures to assist narration. A set of multimodal interaction techniques is presented for a hybrid user interface that integrates existing 3D visualization and interaction devices, including head-mounted displays and depth motion sensors.

In the second paper, Sybren A. Stüvel, Frank van der Stappen, and Arjan Egges, from Universiteit Utrecht, The Netherlands, investigate how accurately human observers recognize collisions between virtual characters. They report the results of two user studies in which participants classify scenarios as "colliding" or "not colliding": a pilot study examines the perception of static images, and the main study expands on this with animated videos. The pilot experiment tests the effect of two variables on the ability to recognize collisions: the distance between the character meshes and the visibility of the inter-character gap. The main experiment examines the angle between the character paths and the severity of the (near) collision.

In the third paper, Marios Andreas Kyriakou, from the University of Cyprus; Xueni Pan, from the University of London, UK; and Yiorgos Lambros Chrysanthou, from the University of Cyprus, examine attributes of virtual-human behavior that may increase the plausibility of a simulated crowd and affect the user's experience in virtual reality (VR). Purpose-developed experiments in both immersive and semi-immersive VR systems query the impact of collision and basic interaction between real users and the virtual crowd on the apparent realism and ease of navigation within VR. Participants' behavior and subjective measurements indicate that facilitating collision avoidance between the user and the virtual crowd makes the virtual characters, the environment, and the whole VR system appear more realistic and lifelike.

In the fourth paper, Mihai Polceanu and Cedric Buche, from Florida International University, and Lab-STICC - ENIB, CERV, Plouzané, France, present a study of existing approaches that explicitly use mental simulation. Taken together, current implementations of the mental-simulation paradigm computationally address many aspects suggested by cognitive-science research. Agents are able to find solutions to nontrivial scenarios in virtual or physical environments. Existing systems also learn new behavior by imitating agents similar to themselves, and they model the behavior of dissimilar agents with the help of specialized models, culminating in collaboration between agents and humans. Approaches that use self-models are able to mentally simulate interaction and to learn about their own physical properties.

In the fifth paper, Yuxing Qiu, Lipeng Yang, Shuai Li, and Qing Xia, from Beihang University, Beijing, China; Hong Qin, from Stony Brook University, USA; and Aimin Hao, from Beihang University, Beijing, China, advocate a method for modeling and enhancing scale-sensitive fluid details. The core of their method is the coupling of multilayer depth-regression analysis with FLIP fluid simulation. First, they capture the depth buffer of the fluid surface from the top of the scene. Second, they employ depth peeling to decompose the target fluid volume into multiple layers and conduct time–space analysis over the surface layers. Third, they propose a logistic regression-based model to pinpoint the interacting regions, wherein multiple detail-relevant factors are taken into account. Finally, details are enhanced by animating extra diffuse materials and augmenting the air–fluid mixing phenomenon.
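To make the regression step concrete, the minimal Python/NumPy sketch below shows how a logistic-regression classifier could flag "interacting" surface cells from a handful of detail-relevant factors. The factor choices, weights, and threshold here are hypothetical illustrations, not values from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def interacting_regions(features, weights, bias, threshold=0.5):
    """Flag surface cells whose fluid detail should be enhanced.

    features : (n_cells, n_factors) array of per-cell factors, e.g.
               velocity magnitude, surface curvature, and the depth
               difference between peeled layers (hypothetical choices;
               the paper's actual factors may differ).
    weights  : (n_factors,) learned logistic-regression coefficients.
    bias     : scalar intercept.
    Returns a boolean mask of cells classified as interacting.
    """
    probability = sigmoid(features @ weights + bias)
    return probability >= threshold

# Toy usage: three cells, each described by three factors.
features = np.array([
    [2.1, 0.8, 0.4],   # fast, curved, large inter-layer gap
    [0.2, 0.1, 0.0],   # calm interior cell
    [1.5, 0.6, 0.3],
])
weights = np.array([1.2, 2.0, 1.5])  # hypothetical trained values
mask = interacting_regions(features, weights, bias=-2.5)
print(mask)  # [ True False  True]
```

In a full pipeline, cells flagged by such a classifier would be the ones seeded with extra diffuse material (spray, foam, bubbles) during the enhancement pass.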
In the sixth paper, Abdullah Bulbul, from Yildirim Beyazit Universitesi, Ankara, Turkey, and Rozenn Dahyot, from Trinity College Dublin, Ireland, propose to automatically populate geo-located virtual cities by harvesting and analyzing content shared on social networks and websites. They show how the poses and motion paths of agents can be realistically rendered using information gathered from social media. The 3D cities are generated automatically from open-source information available online, and the final rendering of both static and dynamic urban scenes uses the Unreal game engine.
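As a toy illustration of the motion-path idea (a stand-in, not the authors' algorithm), the sketch below converts one user's time-stamped, geo-located posts into a coarse agent path by linear interpolation between consecutive post locations. The GeoPost schema is invented for this example; real harvested data would need per-network parsing and map alignment, which this sketch omits.

```python
from dataclasses import dataclass

@dataclass
class GeoPost:
    """A geo-located, time-stamped social-media post (hypothetical schema)."""
    user: str
    t: float        # seconds since some epoch
    lat: float
    lon: float

def motion_path(posts, user, step=60.0):
    """Build a coarse motion path for one agent by linearly
    interpolating between that user's posts, sampled every
    `step` seconds."""
    pts = sorted((p for p in posts if p.user == user), key=lambda p: p.t)
    path = []
    for a, b in zip(pts, pts[1:]):
        t = a.t
        while t < b.t:
            u = (t - a.t) / (b.t - a.t)   # interpolation parameter in [0, 1)
            path.append((a.lat + u * (b.lat - a.lat),
                         a.lon + u * (b.lon - a.lon)))
            t += step
    if pts:
        path.append((pts[-1].lat, pts[-1].lon))
    return path
```

Paths produced this way could then be handed to a game engine's navigation system to drive the animated agents through the generated city.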
