Abstract

The limited understanding of the expressive character of avatars constrains how users interact through their expressions. Using state-of-the-art CAD standards and virtual reality, our approach explores a different paradigm of virtual architecture that focuses on social interaction through avatars' gestural expression. We investigate a method for classifying a context-aware expression data model and activating it autonomously, in real time, alongside users' communication. A virtual office domain was chosen as our study model based on our previous research. To achieve these goals, an avatar expression agent was developed on top of our previous context-aware architectural simulation platform, called 'V-PlaceSims'. The output is delivered as a Web page with an embedded ActiveX control for simulation and evaluation over the Internet. As users communicate with one another through a text box, the avatar expression agent detects each user's emotion by parsing the input text and performs the related gestures automatically. The result reveals a more convenient way to communicate with other users, enhanced by automated expression.
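The text-parsing step described above could be sketched as a simple keyword lookup that maps a chat message to an emotion and then to a gesture. This is an illustrative assumption, not the authors' implementation; the emotion labels, keyword lists, and gesture names here are hypothetical.

```python
# Minimal sketch of keyword-based emotion detection driving avatar gestures.
# All keyword lists and gesture names below are invented for illustration.
EMOTION_KEYWORDS = {
    "happy": ["glad", "great", "haha", "thanks", ":)"],
    "angry": ["angry", "annoyed", "terrible"],
    "sad": ["sad", "sorry", "unfortunately", ":("],
}

GESTURES = {
    "happy": "wave_and_smile",
    "angry": "cross_arms",
    "sad": "lower_head",
    "neutral": "idle",
}

def detect_emotion(text: str) -> str:
    """Return the first emotion whose keyword appears in the message."""
    lowered = text.lower()
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(k in lowered for k in keywords):
            return emotion
    return "neutral"

def gesture_for(text: str) -> str:
    """Select the avatar gesture triggered by a chat message."""
    return GESTURES[detect_emotion(text)]
```

A real agent would likely replace the keyword table with a richer lexicon or classifier, but the control flow (parse text, classify emotion, trigger gesture) matches the pipeline the abstract describes.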
