Abstract

In this paper, speech emotion verification is proposed and analyzed using two popular methods in speech processing: the Mel-Frequency Cepstral Coefficient (MFCC) and the Gaussian Mixture Model (GMM). Features for speech emotion were extracted using the Short-Time Fourier Transform (STFT) for MFCC and the Short-Time Histogram (STH) for GMM. Verification performance is measured with three neural network (NN) and fuzzy neural network (FNN) architectures: the Multi-Layer Perceptron (MLP), the Adaptive Neuro-Fuzzy Inference System (ANFIS), and the Generic Self-organizing Fuzzy Neural Network (GenSoFNN). Experiments using real audio clips from movies and television sitcoms show the potential of the proposed feature extraction methods for real-time use, owing to their reasonable accuracy and fast training time. This suggests practical deployment if the emotion verifier can be embedded in real-time applications, especially on personal digital assistants (PDAs) or smartphones.
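As a rough illustration of the MFCC pipeline the abstract refers to (windowed STFT frame → power spectrum → triangular mel filterbank → log → DCT), the following is a minimal pure-Python sketch for a single frame. The frame size, filter count, and number of coefficients are illustrative assumptions, not the paper's actual settings, and a naive DFT is used in place of an optimized FFT for clarity.

```python
import math

def hz_to_mel(f):
    # Standard mel-scale mapping
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, sr, n_filters=20, n_coeffs=12):
    """MFCCs for one speech frame (parameters are illustrative, not the paper's)."""
    n = len(frame)
    # Hamming window to reduce spectral leakage
    windowed = [s * (0.54 - 0.46 * math.cos(2.0 * math.pi * i / (n - 1)))
                for i, s in enumerate(frame)]
    # Power spectrum via a naive DFT (first n//2 + 1 bins)
    half = n // 2 + 1
    power = []
    for k in range(half):
        re = sum(x * math.cos(2.0 * math.pi * k * i / n) for i, x in enumerate(windowed))
        im = -sum(x * math.sin(2.0 * math.pi * k * i / n) for i, x in enumerate(windowed))
        power.append((re * re + im * im) / n)
    # Triangular filters spaced evenly on the mel scale from 0 Hz to sr/2
    mel_max = hz_to_mel(sr / 2.0)
    mel_pts = [j * mel_max / (n_filters + 1) for j in range(n_filters + 2)]
    bins = [int((n + 1) * mel_to_hz(m) / sr) for m in mel_pts]
    log_energies = []
    for j in range(1, n_filters + 1):
        e = 0.0
        if bins[j] > bins[j - 1]:               # rising slope of triangle
            for k in range(bins[j - 1], bins[j]):
                e += power[k] * (k - bins[j - 1]) / (bins[j] - bins[j - 1])
        if bins[j + 1] > bins[j]:               # falling slope of triangle
            for k in range(bins[j], bins[j + 1]):
                e += power[k] * (bins[j + 1] - k) / (bins[j + 1] - bins[j])
        log_energies.append(math.log(e + 1e-10))
    # DCT-II decorrelates the log filterbank energies into cepstral coefficients
    return [sum(le * math.cos(math.pi * c * (j + 0.5) / n_filters)
                for j, le in enumerate(log_energies))
            for c in range(1, n_coeffs + 1)]
```

In practice, per-frame MFCC vectors like these would be collected over the utterance and fed to the MLP, ANFIS, or GenSoFNN classifier the abstract mentions.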
