Abstract
Among dimensional models of emotion, two- and three-dimensional models are the most popular, while the true dimensionality of affective space remains a matter of debate. Here we study the inherent dimensionality of the emotion space represented in facial expressions, along with the mapping of electromyography (EMG) signals recorded from facial muscles to expressed emotions. For this purpose, an experiment was conducted with parallel EMG recording from three facial muscles (Zygomaticus Major, Corrugator Supercilii, and Masseter) and video recording of the face with automated emotion recognition from the video stream. Data analysis based on machine learning methods confirmed the three-dimensional nature of the affective space (at least the part of it reflected in facial expressions). This result is consistent with the VAD and PAD models. Possibilities of accounting for complex, higher-order, or social emotions without introducing additional dimensions are discussed. The second finding of this study is that all three significant principal components of expressed affect can be reconstructed from EMG signals recorded from three facial muscles with the help of machine learning.
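The two analyses summarized above follow a common pattern: estimate the dimensionality of the expressed-affect space with principal component analysis, then regress the leading components onto EMG-derived features. The sketch below illustrates that pattern only; the abstract does not specify the authors' actual pipeline, so the input arrays (`emotion_scores`, `emg_features`), their shapes, and the choice of ridge regression are all illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the real recordings: per-frame emotion
# scores from the video-based recognizer and per-frame features from
# the three EMG channels. Random data is used here only so the
# sketch runs; real signals are what make the reconstruction work.
n_frames = 5000
emotion_scores = rng.random((n_frames, 7))  # e.g. 7 recognized emotion scores
emg_features = rng.random((n_frames, 3))    # 3 muscles -> 3 feature channels

# Step 1: estimate the inherent dimensionality of the expressed-affect
# space from the spectrum of principal components of the emotion scores.
pca = PCA().fit(emotion_scores)
cumulative = np.cumsum(pca.explained_variance_ratio_)
n_significant = int(np.searchsorted(cumulative, 0.95)) + 1
print(f"components needed for 95% of variance: {n_significant}")

# Step 2: project onto the first three principal components and try to
# reconstruct them from the EMG features with a regression model.
pc_scores = pca.transform(emotion_scores)[:, :3]
X_train, X_test, y_train, y_test = train_test_split(
    emg_features, pc_scores, test_size=0.2, random_state=0
)
model = Ridge(alpha=1.0).fit(X_train, y_train)
pred = model.predict(X_test)
for i in range(3):
    print(f"PC{i + 1} reconstruction R^2: {r2_score(y_test[:, i], pred[:, i]):.3f}")
```

A high held-out R² on all three components would support the claim that three EMG channels carry enough information to recover the three significant dimensions of expressed affect.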