Abstract

The article proposes an original convolutional neural network (CNN) for the automatic voice-based assessment of a person’s emotional state. Key principles of such CNNs and state-of-the-art approaches to their design are described. A one-dimensional (1-D) CNN model inspired by the structure of the human inner ear is presented. According to the obtained classification estimates, the proposed CNN model performs no worse than known analogues. The linguistic robustness of the CNN is confirmed, and its key advantages in intelligent socio-cyberphysical systems are discussed. For the task of voice-based identification of a person’s destructive emotions, the developed CNN achieves a probability of 72.75%.
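The abstract describes a one-dimensional CNN operating on acoustic input. As a purely illustrative sketch (the layer sizes, the MFCC input representation, and the four emotion classes below are assumptions, not the paper’s architecture), the forward pass of one 1-D convolution block followed by global average pooling and a linear classifier can be written as:

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1-D convolution: x (C_in, T), w (C_out, C_in, K), b (C_out,)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    y = np.zeros((c_out, t_out))
    for t in range(t_out):
        # Contract the (C_in, K) window against each output filter.
        y[:, t] = np.tensordot(w, x[:, t:t + k], axes=([1, 2], [0, 1])) + b
    return y

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(40, 100))         # assumed input: 40 MFCC coefficients x 100 frames
w1 = rng.normal(size=(64, 40, 5)) * 0.1
b1 = np.zeros(64)

h = relu(conv1d(x, w1, b1))            # feature maps: (64, 96)
pooled = h.mean(axis=1)                # global average over the time axis -> (64,)
w_fc = rng.normal(size=(4, 64)) * 0.1  # 4 hypothetical emotion classes
logits = w_fc @ pooled                 # class scores: (4,)
print(logits.shape)
```

In a trained model the filters `w1` and classifier weights `w_fc` would be learned; here they are random placeholders that only demonstrate the tensor shapes flowing through a 1-D CNN.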

Highlights

  • At present, the interaction between a human and the digital environment cannot be overemphasized [1, 2]

  • The most important component of such human digital behavior is the use of social networks, which have been developing actively over the last few years

  • The result of the grouping showed that the proposed neural network is able to determine the gender type with an accuracy of 97% for both men and women on the basis of the given acoustic data (Table 3)


Summary

Introduction

The interaction between a human and the digital environment cannot be overemphasized [1, 2]. Today most popular messengers and social media allow recording and sending audio messages and voice mails, which simplifies and accelerates information exchange between users and simultaneously increases the share of acoustic content in interpersonal interaction within multi-agent socio-cyberphysical systems. In this regard, research into voice-based audio materials, both those shared among users and those publicly available, makes a significant contribution to identifying destructive content in the virtual environment.

