Abstract

Emotion is one of the main characteristics that distinguish human beings from other living beings. About 90% of communication is carried by the voice, and these vocal cues convey different emotions. It has been found that humans exhibit 27 distinct types of emotions, six of which are major: happiness, sadness, fear, anger, disgust, and surprise. Emotion analysis helps not only to understand a person's emotion at a given moment but also to infer various physical and mental conditions, such as muscular tension, skin elasticity, blood pressure, and breathing pattern, which in turn indicate heart condition, among others. This analysis can also help to determine whether a person's facial expression is genuine or mimicked. In this paper, we present a nonlinear analysis and a predictive model of vocal emotion by extracting features from a given user's voice. Using the phase-space plot method, the data are characterized by parameters that capture features of the voice signal, obtained by measuring the volume of an ellipse fitted to the main cluster. The whole analysis is carried out in Python. Using this quantifying parameter as a feature, the voice signals are used to train machine learning algorithms, and fuzzy K-means clustering is then applied to differentiate between multiple emotions. Our experimental results provide a satisfactory conclusion in this context.
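
The following is a minimal sketch, not the authors' exact pipeline, of the kind of processing the abstract describes: a voice frame is delay-embedded into phase space, the main cluster is quantified by the volume of a fitted covariance ellipsoid, and the resulting features are separated with a small fuzzy K-means (fuzzy c-means) loop written directly in NumPy. Parameter choices such as `delay`, `dim`, and the number of clusters are illustrative assumptions, not values from the paper.

```python
import numpy as np
from math import gamma, pi

def phase_space_embed(x, delay=10, dim=3):
    """Time-delay embedding of a 1-D signal into `dim`-dimensional phase space."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

def ellipsoid_volume(points):
    """Volume of the covariance ellipsoid fitted to the main point cluster."""
    cov = np.cov(points, rowvar=False)
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    d = len(eigvals)
    unit_ball = pi ** (d / 2) / gamma(d / 2 + 1)   # volume of the d-dimensional unit ball
    return unit_ball * np.prod(np.sqrt(eigvals))    # semi-axes are sqrt of the eigenvalues

def fuzzy_kmeans(X, c=6, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: returns cluster centres and the membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        w = u ** m
        centres = (w.T @ X) / w.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        u = dist ** (-2.0 / (m - 1.0))              # standard fuzzy c-means membership update
        u /= u.sum(axis=1, keepdims=True)
    return centres, u

# Illustrative usage: one feature (ellipsoid volume) per voice frame, then clustering.
# Random data stands in for the fixed-length voice frames used in the paper.
rng = np.random.default_rng(0)
frames = [rng.standard_normal(2000) for _ in range(30)]
features = np.array([[ellipsoid_volume(phase_space_embed(f))] for f in frames])
centres, membership = fuzzy_kmeans(features, c=6)
labels = membership.argmax(axis=1)                  # hard assignment for inspection
```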
