Abstract

In order to make human–computer interfaces more adaptive and user-friendly, the classification and recognition of a user's emotional state has become a significant topic of interest in research on natural spoken dialogue systems. In this article we take up the idea of using hidden Markov models (HMMs) to recognize emotions from speech signals, and we integrate the recognition results into adaptive dialogue management. We first give an overview of the characteristics of selected emotions with respect to the features extracted from the speech signal, and we describe the emotion recognizer. We then present our approaches to improving the quality of the recognizer models, and we show how the recognizer's results are used to adapt a dialogue system's behavior to the user's emotional state.
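To illustrate the general scheme behind HMM-based emotion recognition (not the authors' specific system), the sketch below trains nothing: it hard-codes one small discrete-observation HMM per emotion and classifies a sequence of quantized speech-feature symbols by the maximum-likelihood decision rule, using the standard forward algorithm. The two emotion classes, the 2-state topology, and the three "pitch level" symbols are illustrative assumptions, not details from the article.

```python
def forward_likelihood(pi, A, B, obs):
    """Forward algorithm: P(obs | HMM) for a discrete-observation HMM.

    pi  -- initial state distribution, pi[i]
    A   -- state transition matrix, A[i][j] = P(j | i)
    B   -- emission matrix, B[i][k] = P(symbol k | state i)
    obs -- sequence of observation symbol indices
    """
    n = len(pi)
    # alpha[i] = joint probability of the prefix seen so far and state i
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

def classify(models, obs):
    """Pick the emotion whose HMM assigns the highest likelihood."""
    return max(models, key=lambda name: forward_likelihood(*models[name], obs))

# Hypothetical 2-state HMMs over 3 quantized prosodic symbols
# (0=low, 1=mid, 2=high pitch); parameters are hand-set for the
# example, whereas a real recognizer would estimate them from
# labeled speech data (e.g. via Baum-Welch training).
models = {
    "anger":   ([0.5, 0.5],
                [[0.7, 0.3], [0.3, 0.7]],
                [[0.1, 0.2, 0.7], [0.2, 0.2, 0.6]]),   # favors high pitch
    "neutral": ([0.5, 0.5],
                [[0.7, 0.3], [0.3, 0.7]],
                [[0.6, 0.3, 0.1], [0.5, 0.3, 0.2]]),   # favors low pitch
}

print(classify(models, [2, 2, 1, 2]))  # mostly high pitch -> anger
print(classify(models, [0, 0, 1, 0]))  # mostly low pitch  -> neutral
```

A dialogue manager could then branch on the winning label, e.g. switching to a more explicit confirmation strategy when "anger" is detected.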
