Abstract

Emotion recognition from speech signals remains a challenging task, so proposing an efficient and accurate technique for speech-based emotion recognition is an important goal. This study focuses on recognizing four basic human emotions (sad, angry, happy, and neutral) from vocal expressions using an artificial neural network, enabling more efficient and productive machine behavior. An effective model based on a Bayesian regularized artificial neural network (BRANN) is proposed for speech-based emotion recognition. The experiments are conducted on the well-known Berlin database, using 1470 speech samples carrying the basic emotions: 500 samples of anger, 300 of happiness, 350 of the neutral state, and 320 of sadness. Four speech features (frequency, pitch, amplitude, and formant) are used to recognize the four basic emotions. The performance of the proposed methodology is compared with that of state-of-the-art methodologies for emotion recognition from speech. The proposed methodology achieved 95% recognition accuracy, the highest among the compared techniques in this domain.

Highlights

  • In the modern age of technology, emotion recognition from speech is a hot research topic in the field of speech signal processing [1]

  • As this work targets four emotional states (angry, sad, neutral, and happy), utterances carrying these four emotions are filtered and passed through the feature extraction module to generate the dataset used to train the Bayesian regularized artificial neural network (BRANN), as sketched in the code after this list

  • The experimental results of speech-based emotion recognition with a Bayesian regularized artificial neural network on the Berlin emotional speech database show efficient performance compared with state-of-the-art techniques
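
As a concrete illustration of the pipeline described in these highlights, the Python fragment below sketches one way the filtering and feature extraction steps could look. It is a minimal sketch rather than the authors' implementation: the data directory is hypothetical, the standard Berlin EMO-DB file-naming convention (the sixth character of each filename encodes the emotion) is assumed, and simple librosa statistics, including a spectral-rolloff proxy for formants, stand in for the paper's exact features.

    # Minimal sketch of the filtering and feature extraction steps
    # (illustrative, not the authors' code). Assumes the Berlin EMO-DB
    # naming convention: the sixth character of a filename such as
    # "03a01Wa.wav" encodes the emotion.
    import glob
    import os

    import librosa
    import numpy as np

    EMOTIONS = {"W": "angry", "F": "happy", "N": "neutral", "T": "sad"}

    def extract_features(path):
        """Per-utterance stand-ins for pitch, amplitude, frequency, and
        formant (true formants would need an LPC/Praat tool; spectral
        rolloff is used here only as a crude proxy)."""
        y, sr = librosa.load(path, sr=16000)
        f0, _, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)  # pitch contour
        return [
            np.nanmean(f0),                                       # mean pitch (Hz)
            librosa.feature.rms(y=y).mean(),                      # amplitude (RMS energy)
            librosa.feature.spectral_centroid(y=y, sr=sr).mean(), # "frequency"
            librosa.feature.spectral_rolloff(y=y, sr=sr).mean(),  # formant proxy
        ]

    X, labels = [], []
    for path in glob.glob("emodb/wav/*.wav"):   # hypothetical data directory
        code = os.path.basename(path)[5]        # sixth character = emotion code
        if code in EMOTIONS:                    # keep only the four target states
            X.append(extract_features(path))
            labels.append(EMOTIONS[code])

    X = np.nan_to_num(np.array(X))  # guard against files with no voiced frames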


Introduction

In the modern age of technology, emotion recognition from speech is a hot research topic in the field of speech signal processing [1]. There is a gap between humans and computers: computers act only logically, whereas humans act both logically and emotionally. This gap makes computers less compatible with humans. Some well-known emotions, such as anger, happiness, sadness, and the neutral state [4], can affect speech signals. Sound features such as frequency, amplitude, pitch, and formant have been used for emotion recognition from speech signals [5]. The objective of this research is to review emotion recognition techniques and their recognition accuracy, and to produce a system that recognizes the four basic emotions (anger, sadness, happiness, and neutral) from speech signals.
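
Continuing the sketch above, the fragment below shows how a classifier could be trained on such features. This is a simplification, not the paper's BRANN: full Bayesian regularization (as implemented, for example, by MATLAB's trainbr) adapts the weight penalty from the data, whereas scikit-learn's MLPClassifier is used here with a fixed L2 penalty (alpha) as a stand-in regularizer.

    # Minimal classification sketch (a simplification of the BRANN idea).
    # A fixed L2 weight penalty (alpha) approximates the role that Bayesian
    # regularization plays in adapting the penalty automatically.
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0)

    clf = make_pipeline(
        StandardScaler(),                         # features differ widely in scale
        MLPClassifier(hidden_layer_sizes=(20,),   # one small hidden layer
                      alpha=1e-2,                 # fixed L2 weight penalty
                      max_iter=2000,
                      random_state=0),
    )
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))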
