Impressive progress has been made in developing companion Socially Interactive Agents (SIAs) that provide companionship and reduce loneliness. However, recent works focus on analyzing multimodal feedback during the answering part of interactions while ignoring the questioning part. Furthermore, research on SIAs is conducted primarily in English, which poses a challenge for Chinese SIAs because of the cultural and linguistic differences between English and Chinese. We therefore introduce the Chinese Natural Audiovisual Multimodal Database (CNAMD) corpus, the first and largest freely available Chinese multimodal database of multi-person interaction, containing 48 hours of video with annotations across eight modalities. Using CNAMD, we analyze the characteristics of vocal-verbal, audio, behavioral, and combined multimodal cues during questioning, evaluate six baselines on three tasks, and propose improvements for processing everyday Chinese data. These findings will help designers account for Chinese customs and language when building Chinese SIAs, making them better suited to the Chinese cultural context and its users.