Abstract

Speech is an effective medium for analyzing a speaker's mental and psychological health. Automatic speech recognition has been extensively investigated for human-computer interaction and for understanding the emotional and psychological dimensions of human behavior. Studies indicate a strong link between emotions and personality when prosodic speech parameters are analyzed. This work proposes NPSO, a novel personality and emotion classification model based on a particle swarm optimization (PSO) optimized convolutional neural network (CNN) that predicts both emotion and personality. The model is computationally efficient and outperforms language models. Mel-frequency cepstral coefficients (MFCC) are used as cepstral speech features, predicting emotions with 90% testing accuracy and personality with 91% accuracy on SAVEE (Surrey Audio-Visual Expressed Emotion) individually. The work also identifies the correlation between emotion and personality. The experiments use four corpora, SAVEE, RAVDESS (Ryerson Audio-Visual Database of Emotional Speech and Song), CREMA-D (Crowd-sourced Emotional Multimodal Actors Dataset), and TESS (Toronto Emotional Speech Set), together with the Big Five personality model to find associations between emotions and personality traits. Experimental results show classification accuracies on the combined datasets of 74% for emotion and 89% for personality. The proposed model covers seven emotions and five personality classes. The results show that MFCC features are effective for characterizing and recognizing emotions and personality simultaneously.
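The MFCC features the abstract relies on can be sketched with the standard extraction pipeline (framing, windowing, power spectrum, mel filterbank, log compression, DCT). The following is a minimal NumPy illustration under assumed parameters (16 kHz sampling, 512-point FFT, 26 mel filters, 13 coefficients), not the authors' implementation:

```python
import numpy as np

def mel(f):
    # convert frequency in Hz to the mel scale
    return 2595.0 * np.log10(1.0 + f / 700.0)

def inv_mel(m):
    # convert mel back to Hz
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # triangular filters evenly spaced on the mel scale
    m_points = np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * inv_mel(m_points) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):           # rising slope
            fb[i - 1, k] = (k - left) / (center - left)
        for k in range(center, right):          # falling slope
            fb[i - 1, k] = (right - k) / (right - center)
    return fb

def dct2(x, n_out):
    # DCT-II along the last axis, keeping the first n_out coefficients
    n_in = x.shape[-1]
    n = np.arange(n_in)
    basis = np.cos(np.pi * np.arange(n_out)[:, None] * (2 * n + 1) / (2 * n_in))
    return x @ basis.T

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_filters=26, n_coeffs=13):
    # frame the signal and apply a Hamming window
    frames = np.array([signal[s:s + n_fft] * np.hamming(n_fft)
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    # power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # mel filterbank energies, floored to avoid log(0)
    energies = np.maximum(power @ mel_filterbank(n_filters, n_fft, sr).T, 1e-10)
    # log compression followed by DCT gives the cepstral coefficients
    return dct2(np.log(energies), n_coeffs)    # shape: (n_frames, n_coeffs)
```

For example, a one-second 440 Hz sine at 16 kHz yields a feature matrix of 61 frames by 13 coefficients, which could then be fed to a classifier such as the CNN described above.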
