Abstract

Existing emotional speech recognition applications usually distinguish between a small number of emotions in speech. However, this set of so-called basic emotions varies from one application to another, depending on the application's needs. To support such differing needs, an emotional speech model based on the fuzzy emotion hypercube is presented. In addition to what existing models offer, it also supports the recognition of derived emotions, which are combinations of basic emotions in speech. We demonstrate the application of this model with a prosody-based recognizer using Hidden Markov Models (HMMs). The approach builds on standard speech recognition technology using semi-continuous hidden Markov models. Both the selection of features and the design of the recognition system are addressed.
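To illustrate the idea of the fuzzy emotion hypercube, the following is a minimal sketch (not taken from the paper): each axis of the hypercube carries the fuzzy membership degree of one basic emotion, and a derived emotion is modelled as a combination of basic-emotion memberships. The emotion names, prototype values, and recognizer scores below are hypothetical placeholders.

```python
import numpy as np

# Assumed set of basic emotions; the actual set is application-dependent.
BASIC_EMOTIONS = ["anger", "joy", "sadness", "fear"]

def to_point(memberships: dict[str, float]) -> np.ndarray:
    """Build a point in the emotion hypercube from per-emotion membership degrees (0..1)."""
    vec = np.zeros(len(BASIC_EMOTIONS))
    for i, emotion in enumerate(BASIC_EMOTIONS):
        vec[i] = np.clip(memberships.get(emotion, 0.0), 0.0, 1.0)
    return vec

def nearest_label(point: np.ndarray, prototypes: dict[str, np.ndarray]) -> str:
    """Map a hypercube point to the closest labelled prototype (basic or derived emotion)."""
    return min(prototypes, key=lambda name: np.linalg.norm(point - prototypes[name]))

# A derived emotion such as "disappointment" could be placed in the hypercube as a
# combination of basic emotions (the membership values here are purely illustrative).
prototypes = {
    "anger": to_point({"anger": 1.0}),
    "sadness": to_point({"sadness": 1.0}),
    "disappointment": to_point({"sadness": 0.7, "anger": 0.4}),
}

# Hypothetical per-emotion scores from an HMM-based prosody recognizer, interpreted
# as fuzzy memberships and mapped back to an emotion label.
hmm_scores = {"anger": 0.35, "sadness": 0.65, "joy": 0.05, "fear": 0.1}
print(nearest_label(to_point(hmm_scores), prototypes))
```

In this sketch the HMM stage only supplies per-emotion scores; the hypercube then lets the same scores be interpreted either as one of the basic emotions or as a derived combination, which is the flexibility the abstract attributes to the model.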
