Abstract
This paper proposes a novel real-time non-verbal communication system driven by natural language instructions, introducing an artificial intelligence method into the networked virtual environment (NVE). We extract semantic information as an interlingua from the input text by natural language processing and then transmit this semantic feature extraction (SFE), which is in effect a parameterized action representation, to the 3-D articulated humanoid models prepared on each client at remote locations. Once the SFE is received, the virtual human is animated from the synthesized SFE. Experiments between Japanese Sign Language and Chinese Sign Language show that the system provides participants with real-time avatar animations while chatting, rather than relying only on text or predefined gesture icons, so the communication is more natural. The proposed system is also suitable for sign language distance training.