Abstract

To enable intelligent agents to interact smoothly with human users, researchers have deployed novel interaction modalities (e.g. non-verbal cues, vision, and touch) in addition to agents’ conversational skills. Models of multi-modal interaction can enhance an agent’s real-time perception, cognition, and reaction to the user. In this paper we report a novel tele-immersive interaction system built on real-time 3D modelling techniques. In this system the user’s full body is reconstructed from multi-view cameras using a CUDA-based visual hull reconstruction algorithm. The user’s mesh model is then loaded into a virtual environment for interaction with an autonomous agent. Technical details and initial results of the system are presented. Following that, a novel interaction scenario is proposed which links the virtual agent with a remote physical robot that mediates interactions between two geographically separated users. Finally, we discuss in depth the implications of such human-agent interaction and possible future improvements and directions.
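The abstract does not describe the CUDA kernel itself, but the visual hull approach it names is commonly implemented by voxel carving: a 3D point belongs to the hull only if it projects inside the subject's silhouette in every camera view. The following NumPy sketch illustrates that core test on the CPU; the function name, the synthetic cameras, and the grid setup are all illustrative assumptions, not the paper's implementation (a GPU version would run the same per-voxel test in a CUDA kernel).

```python
import numpy as np

def visual_hull(silhouettes, projections, grid_points):
    """Carve a voxel grid: a point survives only if it projects inside
    every camera's binary silhouette mask (hypothetical helper)."""
    inside = np.ones(len(grid_points), dtype=bool)
    homo = np.c_[grid_points, np.ones(len(grid_points))]  # homogeneous coords
    for mask, P in zip(silhouettes, projections):
        uvw = homo @ P.T                     # project with 3x4 camera matrix
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = mask.shape
        ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ok[ok] &= mask[v[ok], u[ok]]         # inside the silhouette?
        inside &= ok                         # carve away points any view rejects
    return inside

# Toy setup: two orthographic-style cameras looking along z and y.
P_front = np.array([[1., 0., 0., 0.],    # u = x, v = y
                    [0., 1., 0., 0.],
                    [0., 0., 0., 1.]])
P_top   = np.array([[1., 0., 0., 0.],    # u = x, v = z
                    [0., 0., 1., 0.],
                    [0., 0., 0., 1.]])

# Front silhouette covers only the 2x2 centre; top view sees everything.
mask_front = np.zeros((4, 4), dtype=bool)
mask_front[1:3, 1:3] = True
mask_top = np.ones((4, 4), dtype=bool)

grid = np.array([[x, y, z] for x in range(4) for y in range(4) for z in range(4)])
hull = visual_hull([mask_front, mask_top], [P_front, P_top], grid)
```

With these toy masks the hull keeps the 2x2 column of voxels (x, y in {1, 2}, any z), i.e. 16 of the 64 grid points; the real system would intersect silhouettes from many calibrated views to recover the user's body volume.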
