Abstract

Communication is intelligible only when the conveyed message is interpreted as intended. Such interpretation comes naturally in human-human communication, but it remains laborious for human-machine communication, because non-verbal content such as emotion is inherently blended into vocal communication, which makes human-machine interaction difficult. In this paper we report an experiment that recognizes the emotions anger, sadness, astonishment, fear, happiness and neutral using the fuzzy K-Means algorithm on elicited Oriya speech collected from 35 Oriya-speaking people, aged 22-58 years, belonging to different provinces of Orissa. Using mean pitch, the first two formants, jitter, shimmer and energy as feature vectors, we achieved an accuracy of 65.16% in recognizing the six emotions mentioned above. Emotion recognition has many applications in domains such as call centers, spoken tutoring systems, spoken dialogue research and human-robot interfaces.
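The abstract does not detail the clustering step, but fuzzy K-Means (commonly known as fuzzy c-means) has a standard form: each feature vector receives a graded membership in every cluster rather than a hard assignment, and centers and memberships are updated alternately until convergence. The sketch below is an illustrative, generic implementation of that update loop, not the authors' code; the feature extraction (pitch, formants, jitter, shimmer, energy) is assumed to have already produced the input matrix `X`.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Generic fuzzy c-means clustering.

    X          : (n_samples, n_features) feature vectors
    n_clusters : number of clusters (e.g. 6 for six emotion classes)
    m          : fuzzifier (> 1); m = 2 is the usual default
    Returns (centers, U) where U[i, k] is the membership of sample i
    in cluster k, with each row of U summing to 1.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix, rows normalized to sum to 1.
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # Cluster centers: membership-weighted means of the samples.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Euclidean distance from every sample to every center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)  # guard against division by zero
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U
```

In an emotion-recognition setting, a hard label can be obtained afterwards by taking `U.argmax(axis=1)` for each utterance and mapping clusters to emotion classes via labeled data; that mapping step is outside the scope of this sketch.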
