Abstract

In human–machine interaction, non-verbal communication through the recognition of facial expressions operates on a time scale of milliseconds, bringing a new dimension to how machines can affect modern social life. The ambiguity at this time scale is significant, requiring both humans and machines to rely on rich perceptual skills rather than slow symbolic reasoning, especially in critical applications such as elderly care and supportive procedures for people with mobility or communication problems. This research field draws interdisciplinary contributions and specialized support from the cognitive areas of psychology, sociology, linguistics, industrial design, and informatics. This paper presents an innovative emotion recognition system based on dynamic facial analysis for optimal decision-making in non-verbal communication. It is an innovative computer-vision model for emotion recognition in human–machine interaction that combines a person's gaze with their basic facial expressions. The approach proposes and implements, for the first time in the literature, a Common Space Variational Recurrent Deep Embedding (CSVRdE) intelligent learning system. The proposed technique simplifies the training of customized feature-extraction functions for appropriate image transformations in complex neural network architectures, resulting in increased learning consistency, superior prediction reliability, and high classification efficiency. Specifically, it produces highly accurate results without recurring difficulties of unknown origin, since all characteristics in the dataset are effectively managed.
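
To make the idea of a variational recurrent embedding over facial-feature sequences concrete, the following is a minimal, hypothetical sketch written in PyTorch. It is not the authors' CSVRdE implementation: all names, layer sizes, feature dimensions, and the seven-class emotion head (e.g. VariationalRecurrentEmbedding, feat_dim, latent_dim) are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VariationalRecurrentEmbedding(nn.Module):
    """Illustrative sketch only: encodes a sequence of per-frame features
    (e.g. concatenated gaze and facial-expression descriptors) into a shared
    latent space with a recurrent encoder and a variational bottleneck.
    All layer sizes and names are hypothetical, not taken from the paper."""

    def __init__(self, feat_dim=128, hidden_dim=256, latent_dim=64, n_emotions=7):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)       # mean of latent embedding
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)   # log-variance of latent embedding
        self.classifier = nn.Linear(latent_dim, n_emotions)  # emotion logits

    def forward(self, frames):
        # frames: (batch, time, feat_dim) sequence of per-frame features
        _, h_last = self.rnn(frames)                 # final hidden state summarizes the clip
        h_last = h_last.squeeze(0)
        mu, logvar = self.to_mu(h_last), self.to_logvar(h_last)
        # Reparameterization trick: sample the embedding during training
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.classifier(z), mu, logvar


def loss_fn(logits, labels, mu, logvar, beta=1e-3):
    # Cross-entropy on emotion labels plus a KL regularizer on the embedding
    ce = F.cross_entropy(logits, labels)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return ce + beta * kl


if __name__ == "__main__":
    model = VariationalRecurrentEmbedding()
    clips = torch.randn(8, 30, 128)          # 8 clips, 30 frames, 128-dim features (assumed)
    labels = torch.randint(0, 7, (8,))       # 7 basic emotion classes (assumed)
    logits, mu, logvar = model(clips)
    print(loss_fn(logits, labels, mu, logvar).item())
```

The variational bottleneck is what gives the embedding its regularized "common space" character in this sketch; the actual CSVRdE architecture and training procedure are described in the full text.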
