Abstract

This research presents methods for classifying audio recordings using convolutional neural network architectures. A convolutional neural network model was developed to recognize the emotion of speech in audio recordings. To form the training dataset of 48,648 recordings, the RAVDESS, TESS, SAVEE and CREMA-D datasets were combined and data augmentation techniques were applied: adding different types of noise to the recordings, varying the pitch, and speeding up or slowing down the audio. These techniques were intended to increase the robustness of the classifier so that it can accurately recognize the emotion of recordings from any source, and to ensure the model can be applied in a variety of scenarios, such as call centers, security systems, voice assistants, healthcare and education, to identify the emotional state of the user. After training, the model achieved an accuracy of 70.53%, demonstrating its success in recognizing the emotion of audio recordings. These findings have a wide range of potential applications, allowing for a more personalized user experience, improved evaluation of user engagement, and more accurate, personalized treatment. Additionally, the model could detect whether a user is in an abnormal emotional state and restrict access to certain functions, providing a more secure environment. This research opens opportunities for further work in deep learning for audio recognition, enabling more accurate and personalized models to be developed.
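The augmentation steps described above (noise injection and speed variation) can be sketched as simple waveform transforms. This is a minimal illustrative sketch, not the authors' implementation; the function names and parameters are assumptions, and only NumPy is used here, whereas a real pipeline would typically rely on an audio library for independent pitch shifting.

```python
import numpy as np

def add_noise(signal, noise_factor=0.005, rng=None):
    """Additive white Gaussian noise, one of the augmentations described."""
    rng = rng or np.random.default_rng(0)
    return signal + noise_factor * rng.standard_normal(len(signal))

def change_speed(signal, rate=1.1):
    """Speed up (rate > 1) or slow down (rate < 1) by linear resampling.
    Note: naive resampling also shifts pitch; libraries such as librosa
    provide separate time-stretch and pitch-shift transforms."""
    old_idx = np.arange(len(signal))
    new_idx = np.linspace(0, len(signal) - 1, int(len(signal) / rate))
    return np.interp(new_idx, old_idx, signal)

# Hypothetical example: a 1-second 440 Hz tone at a 16 kHz sample rate.
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
noisy = add_noise(sig)            # same length, perturbed samples
faster = change_speed(sig, 1.25)  # ~20% fewer samples (shorter clip)
```

Each transform yields a new labeled training example from an existing recording, which is how a base corpus can be expanded to the tens of thousands of clips used here.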
