Abstract

Speech is one of the biometric characteristics of a human being, like fingerprints, DNA, and the retina of the eye: no two people have the same voice. Human emotion is often assumed to be something that can only be read from a person's face, or from changes in facial expression, but it turns out that emotion can also be detected from the spoken voice. Emotions such as happy, angry, neutral, sad, and surprise can be detected from the speech signal. Because voice recognition systems are still being developed, this research analyzes a person's emotion from the speech signal. Related research on speech has addressed identity recognition, gender recognition, and emotion recognition in conversation. In this research the author performs two-class emotion classification of speech over the emotions happy, angry, neutral, sad, and surprise. The classification algorithm used is the SVM (Support Vector Machine), with MFCC (Mel-Frequency Cepstral Coefficients) for feature extraction, whose filtering process is adapted to human hearing. Implementing both algorithms gives accuracy levels of happy = 68.54%, angry = 75.24%, neutral = 78.50%, sad = 74.22%, and surprise = 68.23%.
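The abstract describes a pipeline of MFCC feature extraction followed by SVM classification. The following is a minimal sketch of that kind of pipeline, not the authors' actual implementation: the use of librosa for MFCC extraction and scikit-learn for the SVM, as well as all file paths, labels, and hyperparameters, are assumptions made for illustration.

```python
# Minimal sketch of an MFCC + SVM speech-emotion pipeline.
# Library choices (librosa, scikit-learn) and all paths/labels/hyperparameters
# are illustrative placeholders, not taken from the paper.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

EMOTIONS = ["happy", "angry", "neutral", "sad", "surprise"]

def mfcc_features(path, n_mfcc=13):
    """Load a speech clip and return its mean MFCC vector.

    The mel filter bank inside the MFCC computation mimics the non-linear
    frequency resolution of human hearing, which is the "filter process
    adapted to human listening" mentioned in the abstract.
    """
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # average over time frames -> fixed-length vector

def train_emotion_svm(paths, labels):
    """Train a multi-class SVM on MFCC features and report test accuracy."""
    X = np.vstack([mfcc_features(p) for p in paths])
    y = np.array(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", C=1.0)  # placeholder kernel and regularization
    clf.fit(X_tr, y_tr)
    return clf, accuracy_score(y_te, clf.predict(X_te))

if __name__ == "__main__":
    # Replace these placeholders with a real labeled speech dataset:
    # one audio file per utterance plus its emotion label.
    paths = ["data/happy_01.wav", "data/angry_01.wav"]
    labels = ["happy", "angry"]
    model, acc = train_emotion_svm(paths, labels)
    print(f"test accuracy: {acc:.2%}")
```

For a two-class setup as described in the abstract, the same code can be run with only the two emotions of interest in `paths` and `labels` (for example, happy vs. not-happy), which is one common way per-emotion accuracies like those reported are obtained.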

