Abstract

Intelligent virtual agents (VAs) already support us in a variety of everyday tasks such as setting up appointments, monitoring our fitness, and organizing messages. Adding a humanoid body representation to these mostly voice-based VAs has enormous potential to enrich the human–agent communication process but, at the same time, raises expectations regarding the agent’s social, spatial, and intelligent behavior. Embodied VAs may be perceived as less human-like if they, for example, do not return eye contact or do not show plausible collision behavior with their physical surroundings. In this article, we introduce a new model that extends human-to-human interaction to interaction with intelligent agents and covers the different multi-modal and multi-sensory channels that are required to create believable embodied VAs. Theoretical considerations of the different aspects of human–agent interaction are complemented by implementation guidelines to support the practical development of such agents. In this context, we particularly emphasize one aspect that is distinctive of embodied agents, i.e., interaction with the physical world. Since previous studies indicated negative effects of implausible physical behavior of VAs, we were interested in the initial responses of users when interacting with a VA with virtual–physical capabilities for the first time. We conducted a pilot study to collect subjective feedback regarding two forms of virtual–physical interaction. Both were designed and implemented in preparation for the user study and represent two different approaches to virtual–physical manipulation: (i) displacement of a robotic object, and (ii) writing on a physical sheet of paper with thermochromic ink. The qualitative results of the study indicate positive effects of agents with virtual–physical capabilities in terms of their perceived realism as well as the emotional responses they evoke in users. We conclude with an outlook on possible future developments of different aspects of human–agent interaction in general and the physical simulation in particular.

Highlights

  • Though personal digital assistants have become widespread in smart homes as well as professional environments, most current implementations rely only on audio output, displayed text, or simple graphics

  • We introduced the concept of blended agents—virtual agents (VAs) that are capable of influencing their virtual surroundings and of performing virtual–physical interactions [4]

  • As we intend to provide a virtual 3D body representation for VAs that are embedded into a user’s daily life, we focus on interactions between an intelligent blended agent (IBA) and its real-world surroundings

Introduction

Though personal digital assistants have become widespread in smart homes as well as professional environments, most current implementations rely only on audio output, displayed text, or simple graphics. Augmented reality (AR) technology can add a new dimension to existing agents by providing a humanoid 3D virtual body to complement the voice. Human-like AR representations can enrich the communicative channels through which the agent conveys its status and intentions, for example with gestures and other forms of social behavior. They can also be registered spatially with their environment, which enables a more direct form of spatial interaction than voice-only interaction [1]. Several research projects have addressed the question of whether and how agent embodiment affects the social interaction between virtual agents (VAs) and human communication partners.
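As a purely illustrative sketch of what such spatial registration enables (not taken from the implementation described in this article; the coordinate conventions, function name, and example positions are our own assumptions), the following Python snippet shows how an embodied agent’s head could be oriented toward a tracked user so that it returns eye contact once both are placed in a shared world frame:

import math

def agent_gaze_yaw_pitch(agent_head, user_head):
    # Positions are (x, y, z) tuples in a shared, spatially registered world
    # frame where +y points up and the agent's neutral gaze points along +z.
    dx = user_head[0] - agent_head[0]
    dy = user_head[1] - agent_head[1]
    dz = user_head[2] - agent_head[2]
    horizontal = math.hypot(dx, dz)     # distance projected onto the ground plane
    yaw = math.atan2(dx, dz)            # turn around the up axis toward the user
    pitch = math.atan2(dy, horizontal)  # tilt toward the user's eye height
    return yaw, pitch

# Example: agent at the origin, user half a metre to the right and two metres away.
yaw, pitch = agent_gaze_yaw_pitch((0.0, 1.60, 0.0), (0.5, 1.75, 2.0))
print(math.degrees(yaw), math.degrees(pitch))

In a full system, such a yaw/pitch pair would be recomputed every frame from the tracked head poses and clamped to anatomically plausible limits before driving the agent’s head and eye animation.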
