Abstract

This paper proposes a novel computational model for generating facial expressions that mimic human emotional states. The authors aim to create a system that can generate realistic facial expressions for use in human-robot interaction. The proposed model is based on the Facial Action Coding System (FACS), a widely used tool for describing facial expressions; in this study, FACS is used to identify the muscles involved in each facial expression and the degree to which each muscle is activated. Several machine-learning techniques were used to learn the relationships between facial muscle activations and emotional states; in particular, a hyperplane classifier was employed to separate facial expressions representing the major emotional states. The model's primary advantage is its low computational complexity, which enables it to recognize changes in human emotional states from facial expressions without specialized equipment, even when only low-resolution or long-distance video cameras are available. The proposed approach is intended for use in control systems for various purposes, including security systems and monitoring drivers while they operate vehicles. Experiments showed that the proposed model could generate facial expressions similar to those produced by humans and that human observers recognized these expressions as conveying the intended emotional state. The authors also investigated the effect of different factors on the generation of facial expressions. Overall, the proposed model represents a promising approach for generating realistic facial expressions that mimic human emotional states and could have applications in improving security compliance in sensitive environments. However, potential ethical issues will need to be carefully considered and managed to ensure the responsible use of this technology.
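
To illustrate the hyperplane-classification step described above, the following minimal sketch trains a linear support-vector classifier to map FACS action-unit activation intensities to emotion labels. This is not the authors' implementation: the choice of scikit-learn's LinearSVC, the particular action units (AU4, AU6, AU12, AU15), and all intensity values and labels are hypothetical, chosen only to show how a linear (hyperplane) decision boundary over muscle-activation features could realize this kind of classifier.

```python
# Illustrative sketch only: a hyperplane (linear) classifier mapping
# FACS action-unit intensities to emotion labels, in the spirit of the
# approach the abstract describes. The action units, intensity values,
# and labels below are hypothetical examples, not data from the paper.
import numpy as np
from sklearn.svm import LinearSVC

# Each row: activation intensities (0-5 scale) for a small, hypothetical
# subset of action units: [AU6 cheek raiser, AU12 lip corner puller,
#                          AU4 brow lowerer, AU15 lip corner depressor].
X_train = np.array([
    [4.0, 5.0, 0.0, 0.0],   # strong smile pattern
    [3.5, 4.0, 0.5, 0.0],
    [0.0, 0.0, 4.0, 3.5],   # frown pattern
    [0.5, 0.0, 3.0, 4.0],
])
y_train = ["happiness", "happiness", "sadness", "sadness"]

# A linear SVM learns a separating hyperplane in the AU-intensity space.
clf = LinearSVC()
clf.fit(X_train, y_train)

# Classify a new observation of action-unit intensities.
print(clf.predict([[3.8, 4.5, 0.2, 0.0]]))  # expected: ['happiness']
```

Because prediction with such a classifier reduces to a single dot product per class, it is consistent with the low computational complexity the abstract emphasizes.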
