Abstract

Background/Objective: Speech is one of the modes of Human-Computer Interface (HCI). A speech signal carries not only the message to be conveyed but also speaker characteristics such as the speaker's identity and emotional state. Researchers have recently taken greater interest in the emotional parameters of speech signals, which help improve the functionality of HCI. This research focuses on selecting features that help identify the emotion of the speaker. Methods/Statistical Analysis: Mel Frequency Cepstrum Coefficient (MFCC), Linear Prediction Cepstrum Coefficient (LPCC), and Perceptual Linear Predictive (PLP) methods are used to extract features. Each emotion is modeled as one Hidden Markov Model (HMM) using the Hidden Markov Model Toolkit (HTK). The BeagleBone Black (BBB) board is chosen for the implementation because of its small form factor. Findings: The results indicate that MFCC features give 100% accuracy for the surprise emotion, PLP features give 100% accuracy for the anger emotion, and LPCC features give 100% accuracy for the fear emotion. Conclusion/Improvement: A hybrid feature extraction method should be devised to detect all emotions with 100% accuracy.
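
The sketch below illustrates the overall pipeline the abstract describes: extract cepstral features from each utterance and model each emotion with its own HMM, classifying a test utterance by the model with the highest likelihood. The paper itself uses HTK (and also LPCC/PLP features) on a BeagleBone Black; here librosa and hmmlearn are used only as illustrative stand-ins, and the emotion labels, 13 MFCC coefficients, and 5 HMM states are assumptions, not values taken from the paper.

```python
# Illustrative sketch of per-emotion HMM classification over MFCC features.
# Assumptions: librosa/hmmlearn stand in for HTK; n_mfcc=13 and n_states=5
# are illustrative choices, not the paper's configuration.

import numpy as np
import librosa
from hmmlearn import hmm


def extract_mfcc(wav_path, n_mfcc=13):
    """Load an utterance and return its MFCC matrix (n_frames x n_mfcc)."""
    y, sr = librosa.load(wav_path, sr=None)      # keep the file's native sample rate
    feats = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return feats.T                               # hmmlearn expects (n_frames, n_features)


def train_emotion_models(training_files, n_states=5):
    """Train one Gaussian HMM per emotion.

    training_files maps an emotion label (e.g. 'anger') to a list of wav paths.
    """
    models = {}
    for emotion, paths in training_files.items():
        feats = [extract_mfcc(p) for p in paths]
        X = np.vstack(feats)                     # concatenate all utterances
        lengths = [f.shape[0] for f in feats]    # per-utterance frame counts
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[emotion] = model
    return models


def classify(wav_path, models):
    """Return the emotion whose HMM gives the highest log-likelihood."""
    feats = extract_mfcc(wav_path)
    return max(models, key=lambda e: models[e].score(feats))
```

Usage would follow the same per-emotion pattern the paper describes with HTK: train one model per emotion from labeled recordings, then score an unseen utterance against every model and pick the best-scoring emotion.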
