Abstract

This chapter takes multilingual emotional speech as its research object. The subjects' emotional states are induced through noise stimulation and video watching, and a multilingual emotional speech database covering Chinese, English, Japanese, Korean, and Russian is established. The chapter analyzes the acoustic characteristics of multilingual speech signals in different emotional states, extracts prosodic parameters, compares the acoustic characteristics of emotional speech across languages, and summarizes general patterns in emotional speech features. A Gaussian mixture model (GMM) method is then applied to emotion recognition experiments on each language separately, achieving a recognition rate of 80% on multilingual speech. In addition, emotional vocabulary is annotated, emotional keyword information is extracted from each utterance, and prosodic features are combined with this semantic information to recognize emotional speech. The experimental results show that the emotion recognition rate with semantic information is about three percentage points higher than the rate obtained with prosodic features alone.
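The abstract does not give implementation details for the GMM-based recognizer. As a rough illustration only, the following minimal Python sketch shows the standard approach such a system typically uses: fit one GMM per emotion class on per-utterance prosodic feature vectors, then classify a new utterance by maximum log-likelihood. The emotion labels, feature dimensionality, and component count below are assumptions for demonstration, not the chapter's actual configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical emotion set; the chapter's actual label inventory may differ.
EMOTIONS = ["neutral", "happy", "angry", "sad"]

def train_gmms(features_by_emotion, n_components=8):
    """Fit one diagonal-covariance GMM per emotion on prosodic feature
    vectors (e.g. pitch, energy, and duration statistics per utterance)."""
    models = {}
    for emotion, X in features_by_emotion.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=0)
        gmm.fit(X)
        models[emotion] = gmm
    return models

def classify(models, x):
    """Return the emotion whose GMM assigns the utterance's feature
    vector the highest log-likelihood."""
    scores = {e: m.score(x.reshape(1, -1)) for e, m in models.items()}
    return max(scores, key=scores.get)

# Toy usage with random stand-in data (12-dim prosodic vectors per utterance).
rng = np.random.default_rng(0)
train = {e: rng.normal(i, 1.0, size=(100, 12)) for i, e in enumerate(EMOTIONS)}
models = train_gmms(train)
print(classify(models, rng.normal(2, 1.0, size=12)))
```

Combining prosodic and semantic information, as the chapter describes, would extend this scheme by appending emotional-keyword features to the prosodic vector or by fusing the two scores at decision time; the abstract does not specify which fusion strategy is used.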
