Abstract

Virtual humans (VHs), automated three-dimensional agents, can serve as realistic embodiments for social interactions with human users. Extant literature suggests that a user’s cognitive and affective responses toward a VH depend on the extent to which the interaction elicits a sense of copresence, or the subjective “sense of being together.” Furthermore, prior research has linked copresence to important social outcomes (e.g., likeability and trust), emphasizing the need to understand which factors contribute to this psychological state. Although there is some understanding of the determinants of copresence in virtual reality (VR) (cf. Oh et al., 2018), less is known about what determines copresence in mixed reality (MR), a modality wherein VHs have unique access to social cues in a “real-world” setting. In the current study, we examined the extent to which a VH’s responsiveness to events occurring in the user’s physical environment increased the sense of copresence and heightened affective connections to the VH. Participants (N = 65) engaged in two collaborative tasks with a (nonspeaking) VH using an MR headset. In the first task, no event occurred in the participant’s physical environment; this served as the control condition. In the second task, an event occurred in the participant’s physical environment, which the VH either responded to or ignored depending on the experimental condition. Copresence and interpersonal evaluations of the VH were measured after each collaborative task via self-report measures. Results show that when the VH responded to the physical event, participants experienced a significantly stronger sense of copresence than when the VH did not respond. However, responsiveness did not elicit more positive evaluations of the VH (likeability and emotional connectedness). This study is an integral first step in establishing how and when the affective and cognitive components of evaluations during social interactions diverge. Importantly, the findings suggest that feeling copresence with a VH in MR is partially determined by the VH’s responsiveness to events in the physical environment shared by both interactants.

Highlights

  • Recent advancements in artificial intelligence (AI) and mixed reality (MR) hardware have enabled what industry experts are dubbing “the age of the virtual human” (Titcombe et al., 2020)

  • Participants who interacted with a virtual human (VH) that nonverbally responded to an event in the environment shared with the user reported higher levels of copresence than those who interacted with a VH that ignored the event

  • Our results demonstrate that when interactions occur in MR, cognitive evaluations of a VH, which are captured in multidimensional scales of social presence, vary based on the VH’s contextual responsiveness


Introduction

Recent advancements in artificial intelligence (AI) and mixed reality (MR) hardware have enabled what industry experts are dubbing “the age of the virtual human” (Titcombe et al., 2020). Virtual humans (VHs) are automated, computer-generated embodied agents capable of a wide range of human behavior (Lucas et al., 2017). Despite their artificial nature, VHs are largely perceived as social actors, in part because of their ability to respond realistically to external cues, including users’ affective states (Nass and Moon, 2000; Becker-Asano and Wachsmuth, 2010). Studies on the efficacy of VHs in such contexts have almost exclusively focused on how agent-specific factors, such as dialogue structure and appearance, contribute to desired social outcomes (e.g., see Chattopadhyay et al., 2020). This overlooks the role of the physical environment shared by interactants in shaping such outcomes (Skjaeveland and Garling, 1997), a role that becomes especially salient in MR-based scenarios.
