Abstract
Impressive progress has been made in developing companion Socially Interactive Agents (SIAs) that provide companionship and reduce loneliness. However, recent work focuses on analyzing multimodal feedback in the answering phase of interactions while ignoring the questioning phase. Furthermore, research on SIAs is conducted primarily in English, which poses a challenge for Chinese SIAs because of the cultural differences between English and Chinese. We therefore introduce the Chinese Natural Audiovisual Multimodal Database (CNAMD) corpus, the first and largest freely available Chinese multimodal database for multi-person interaction, containing 48 hours of video with annotations across eight modalities. Using CNAMD, we analyze the vocal-verbal, audio, behavioral, and combined multimodal characteristics of questioning, evaluate six baselines on three tasks, and propose improvements for processing everyday Chinese data. These findings will help designers account for Chinese customs and language when designing Chinese SIAs, making them better suited to the Chinese cultural context and its users.
More From: International Journal of Human–Computer Interaction