Abstract

3D virtual humans and physical human-like robots can be used to interact with people in a remote location and thereby increase the feeling of presence. In a telepresence setup, their behaviors are driven by real participants. We envision that when the real users are absent, for example because they have to leave or do not want to perform a repetitive task, control of the robots can be handed to an artificial intelligence component that sustains the ongoing interaction. When human-mediated interaction is required again, control can be returned to the real users. One of the main challenges in telepresence research is adapting the 3D position and orientation of the remote participants to the actual physical environment so that eye contact and gesture awareness remain appropriate in a group conversation. If the human behind the robot or virtual human leaves, the multi-party interaction should be handed over to an artificial intelligence component. In this paper, we discuss the challenges of autonomous multi-party interaction among virtual characters, human-like robots, and real participants, and describe a prototype system for studying these challenges.
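
To make the position-and-orientation adaptation challenge concrete, the following is a minimal sketch of how a remote participant's position might be remapped into the local physical room and how the corresponding avatar's head yaw could be set to maintain eye contact with a local interlocutor. It is not the paper's method; the function names, coordinate conventions (y-up), and example coordinates are illustrative assumptions.

```python
# Illustrative sketch: remap a remote participant's pose into the local room
# and orient the avatar's head toward a local interlocutor for eye contact.
# All names and coordinates below are hypothetical, not taken from the paper.

import numpy as np

def map_to_local_room(remote_pos, remote_origin, local_origin, scale=1.0):
    """Translate (and optionally scale) a remote-room position into local-room coordinates."""
    return np.asarray(local_origin) + scale * (np.asarray(remote_pos) - np.asarray(remote_origin))

def look_at_yaw(avatar_pos, target_pos):
    """Yaw angle (radians, about the vertical y-axis) that points the avatar's gaze at the target."""
    d = np.asarray(target_pos) - np.asarray(avatar_pos)
    return np.arctan2(d[0], d[2])  # gaze direction projected onto the x/z ground plane

# Example: a remote participant standing 1 m in front of their camera is placed
# in the local room, and the avatar is turned to face a local user.
avatar_local = map_to_local_room(remote_pos=[0.0, 1.6, 1.0],
                                 remote_origin=[0.0, 0.0, 0.0],
                                 local_origin=[2.0, 0.0, 3.0])
local_user = np.array([1.0, 1.2, 1.5])
yaw = look_at_yaw(avatar_local, local_user)
print(f"avatar position: {avatar_local}, head yaw: {np.degrees(yaw):.1f} deg")
```

In a group conversation the same remapping would be applied per participant, and the gaze target would switch with the active speaker; that switching policy is exactly the kind of behavior that an artificial intelligence component would have to take over when the real user leaves.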
