Abstract

This paper proposes a concept of embodying textual messages through virtual or physical robotic avatars in order to convey the non-verbal information embedded in context more efficiently. Under this concept, textual messages are rendered as speech, facial expressions, and body language through a robotic avatar. Interfaces built on this concept are expected to exchange context-embedded information more efficiently than text-based interfaces. A prototype based on virtual avatars has been developed. The avatar is constructed from regular surfaces, and body language and simple facial expressions are produced by controlling the movement of the avatar's links and by configuring body-language functions. The system can generate one or multiple avatars for several intended applications.
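As an illustration of the idea described above, the following minimal Python sketch maps cue words in a textual message to a configurable body-language function (a facial expression plus a sequence of joint targets for the avatar's links). All names here (Avatar joints, `BodyLanguageFunction`, the cue-word table) are hypothetical and not taken from the paper; the sketch only shows the general shape of such a mapping, not the authors' implementation.

```python
# Illustrative sketch only: the paper does not specify an API.
# All class names, joint names, and cue words below are hypothetical.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class JointCommand:
    """Target angle (degrees) for one link of the avatar's articulated body."""
    joint: str
    angle: float


@dataclass
class BodyLanguageFunction:
    """A configurable gesture: a facial expression plus a sequence of joint commands."""
    name: str
    expression: str
    motion: List[JointCommand]


# Hypothetical table of body-language functions keyed by cue words in the text.
GESTURES: Dict[str, BodyLanguageFunction] = {
    "thanks": BodyLanguageFunction(
        name="bow", expression="smile",
        motion=[JointCommand("torso_pitch", 20.0), JointCommand("head_pitch", 15.0)],
    ),
    "sorry": BodyLanguageFunction(
        name="head_down", expression="sad",
        motion=[JointCommand("head_pitch", 25.0)],
    ),
}


def embody_message(text: str) -> BodyLanguageFunction:
    """Select a body-language function from cue words; fall back to a neutral pose."""
    lowered = text.lower()
    for cue, gesture in GESTURES.items():
        if cue in lowered:
            return gesture
    return BodyLanguageFunction(name="idle", expression="neutral", motion=[])


if __name__ == "__main__":
    gesture = embody_message("Thanks for the update!")
    print(gesture.name, gesture.expression)  # -> bow smile
    # The selected joint commands would then drive the avatar's links while
    # the message text is rendered as speech, e.g. via a text-to-speech engine.
```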
