Abstract

How can the behaviour of humans who interact with other humans be simulated in virtual environments? This thesis investigates the issue by proposing a number of dedicated models, computer languages, software architectures, and specifications of computational components. It relies on a large knowledge base from the social sciences, which offers concepts, descriptions, and classifications that guided the research process. The simulation of nonverbal social interaction and group dynamics in virtual environments can be divided into two main research problems: (1) an action selection problem, where autonomous agents must be made capable of deciding when, with whom, and how they interact, according to their own individual characteristics and those of others; and (2) a behavioural animation problem, where, on the basis of the selected interaction, 3D characters must behave realistically in their virtual environment and communicate nonverbally with others by automatically triggering appropriate actions such as facial expressions, gestures, and postural shifts. To introduce the problem of action selection in social environments, a high-level architecture for social agents, based on the sociological concepts of role, norm, and value, is first discussed. A model of action selection for members of small groups, based on proactive and reactive motivational components, is then presented. This model relies on a new tag-based language called the Social Identity Markup Language (SIML), which allows the rich specification of agents' social identities and relationships. A complementary model controls the simulation of interpersonal relationship development within small groups. The interactions of these two models create a complex system exhibiting emergent properties for the generation of meaningful sequences of social interactions over time.
To address the issues related to the visualization of nonverbal interactions, the results of an evaluation experiment are presented, aimed at identifying application requirements through an analysis of how real people interact nonverbally in virtual environments. Based on these results, a number of components for MPEG-4 body animation are described, together with AML, a tag-based language for the seamless integration and synchronization of facial animation, body animation, and speech, and a high-level interaction visualization service for the VHD++ platform. This service simulates the proxemic and kinesic aspects of nonverbal social interactions and includes functionalities such as parametric postures, adapter and observation behaviours, the social avoidance of collisions, intelligent approach behaviours, and the calculation of suitable interaction distances and angles.
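As a purely illustrative sketch, the last of these functionalities, computing a suitable interaction distance and facing angle, can be pictured as simple plane geometry. Nothing below is taken from the thesis or from VHD++; the function name, the chosen distance value, and the 2D simplification are all assumptions made for the sake of the example.

```python
import math

# Assumed comfortable interaction distance in metres. Hall's proxemics places
# the "personal distance" zone roughly between 0.45 m and 1.2 m; the exact
# value an agent would use is a design parameter, not a figure from the thesis.
PERSONAL_DISTANCE = 1.0

def interaction_placement(speaker, listener):
    """Return a target position and facing angle (radians) so that `speaker`
    stands at a comfortable distance from `listener` and faces it.

    Both agents are given as (x, y) positions on the ground plane.
    """
    dx = listener[0] - speaker[0]
    dy = listener[1] - speaker[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        raise ValueError("agents occupy the same position")
    # Unit vector pointing from the speaker towards the listener.
    ux, uy = dx / dist, dy / dist
    # Stop PERSONAL_DISTANCE short of the listener, on the line between them.
    target = (listener[0] - ux * PERSONAL_DISTANCE,
              listener[1] - uy * PERSONAL_DISTANCE)
    # Orientation that turns the speaker towards the listener.
    facing = math.atan2(uy, ux)
    return target, facing

# Example: approaching an agent 3 m away along the x-axis stops 1 m short.
target, facing = interaction_placement((0.0, 0.0), (3.0, 0.0))
# target is (2.0, 0.0); facing is 0.0 (looking along the positive x-axis).
```

A full implementation would additionally account for group formations (several participants arranged around a shared focus), obstacles, and the socially aware collision avoidance mentioned above.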
