Abstract

Previous studies have indicated that vocal emotions in American English and Mandarin Chinese have distinct acoustic profiles, but these profiles have not yet been fully established for either language. This experiment analyzes recorded sentences produced in five emotions by 10 native English speakers and 10 native Mandarin Chinese speakers from a novel emotional speech database (Zhou et al., 2021). To characterize the acoustic properties of the emotional utterances in both languages, this study used the feature extraction toolkit openSMILE to extract 6373 features (Schuller et al., 2016). Principal component analysis was applied to the extracted features for feature selection. Linear mixed-effects regression models were fitted to determine the effect of emotion on the selected features, and Bayesian multinomial logistic regression models were fitted to examine the effects of acoustic features on emotion. The results suggest that American English and Mandarin Chinese exhibit different acoustic patterns of vocal emotion, although some features, particularly pitch-related features, are used in both languages to express emotion. However, pitch variation may be more restricted in the vocal emotions of Mandarin Chinese relative to American English.
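As a rough illustration of the PCA feature-selection step described above, the following sketch reduces a large feature matrix to the components that explain most of its variance. The data here are random stand-ins, not the actual Zhou et al. (2021) recordings or openSMILE output, and the 95% variance threshold is an assumed cutoff, not one stated in the abstract.

```python
import numpy as np

# Stand-in for an utterance-by-feature matrix (e.g. openSMILE functionals);
# the real study extracted 6373 features per utterance.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))          # 100 utterances x 50 features (toy data)

# Standardize each feature, then run PCA via SVD on the standardized matrix.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = S**2 / np.sum(S**2)         # variance ratio per component

# Keep the smallest number of components covering 95% of the variance
# (an assumed threshold for illustration).
k = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
scores = Xs @ Vt[:k].T                  # per-utterance component scores
print(scores.shape)
```

The resulting component scores, rather than the thousands of raw features, would then serve as the predictors or outcomes in downstream regression models.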
