Abstract
Emotion recognition is one of the latest challenges in human-computer interaction (HCI). In general, a system for recognizing human emotional states draws on two inputs: audio/speech and visual expressions. Consequently, such a recognition system requires two kernels, one audio-based and one image-based, to process the audio and visual modules. To reduce the cost of this two-kernel, two-module (TKTM) emotion recognition system, the speech-based kernel can be recast as an image-based process. In this paper, we present a novel speech emotional feature extraction method based on visual signatures derived from the time-frequency representation. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the contrast of the spectrogram image. Then, texture image information (TII) is extracted from the spectrogram image using Laws' masks to characterize the emotional state. Finally, a support vector machine (SVM) classifies the emotion.
Index Terms—emotional feature extraction, speech emotion recognition, spectrogram, texture image information
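As a rough illustration of the pipeline summarized above, the following Python sketch computes a spectrogram image, applies a cubic contrast curve, extracts Laws' texture energy features, and feeds them to an SVM. The spectrogram parameters, the choice of cubic curve, the subset of Laws' masks, and all variable names are assumptions made for illustration; they are not taken from the paper.

```python
# Hedged sketch: spectrogram -> cubic contrast enhancement ->
# Laws' texture energy features -> SVM. Parameter choices are assumptions.
import numpy as np
from scipy.signal import spectrogram, convolve2d
from sklearn.svm import SVC

# 1D Laws' vectors; 2D masks are formed from their outer products.
VECTORS = {
    "L5": np.array([1, 4, 6, 4, 1], dtype=float),     # level
    "E5": np.array([-1, -2, 0, 2, 1], dtype=float),   # edge
    "S5": np.array([-1, 0, 2, 0, -1], dtype=float),   # spot
    "R5": np.array([1, -4, 6, -4, 1], dtype=float),   # ripple
}

def spectrogram_image(signal, fs):
    """Map the magnitude spectrogram to a [0, 1] grayscale image."""
    _, _, sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)
    img = np.log1p(sxx)                    # compress dynamic range
    img -= img.min()
    return img / (img.max() + 1e-12)

def cubic_enhance(img):
    """S-shaped cubic mapping 3x^2 - 2x^3 on [0, 1] to boost contrast;
    the paper's exact cubic coefficients may differ (assumption)."""
    return 3.0 * img**2 - 2.0 * img**3

def laws_texture_features(img):
    """Texture energy (mean absolute response) for each 5x5 Laws' mask."""
    feats = []
    for v in VECTORS.values():
        for h in VECTORS.values():
            mask = np.outer(v, h)
            response = convolve2d(img, mask, mode="same", boundary="symm")
            feats.append(np.mean(np.abs(response)))
    return np.array(feats)

def extract_features(signal, fs):
    """Full feature chain for one utterance."""
    img = cubic_enhance(spectrogram_image(signal, fs))
    return laws_texture_features(img)

# Training and classification on labeled utterances (hypothetical data):
# X_train = np.vstack([extract_features(s, fs) for s in train_signals])
# clf = SVC(kernel="rbf").fit(X_train, train_labels)
# prediction = clf.predict(extract_features(test_signal, fs).reshape(1, -1))
```

In this sketch each utterance is reduced to a fixed-length texture energy vector, so a standard kernel SVM can be trained directly on those vectors.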