Abstract

Human–robot interaction has long relied on estimating human emotions from facial expressions, voice, and gestures. Human emotions have typically been categorized into discrete classes, whereas this study estimates continuous emotions from facial images in common datasets. Linear regression was used to numerically quantify human emotions as valence and arousal, mapping raw images onto these two coordinate axes. Face images from the Japanese Female Facial Expression (JAFFE) dataset and the Extended Cohn–Kanade (CK+) dataset were used in the experiments, with the emotions in these datasets rated by 85 participants. The best result from a series of experiments shows a minimum root mean square error on the JAFFE dataset of 0.1661 for valence and 0.1379 for arousal. Compared with previous methods based on other stimuli, such as songs and sentences, the proposed method showed outstanding emotion-estimation performance on the common test datasets.
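The regression setup described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, dataset sizes, and the synthetic data are all assumptions; the actual study used JAFFE and CK+ images with valence–arousal labels averaged over 85 human raters, and evaluated with root mean square error per axis.

```python
import numpy as np

# Hypothetical sizes (assumptions for illustration only)
rng = np.random.default_rng(0)
n_train, n_test, n_features = 150, 40, 64

# Synthetic stand-ins for flattened face-image features and their
# continuous (valence, arousal) labels
X_train = rng.normal(size=(n_train, n_features))
X_test = rng.normal(size=(n_test, n_features))
W_true = rng.normal(scale=0.1, size=(n_features, 2))
y_train = X_train @ W_true + rng.normal(scale=0.05, size=(n_train, 2))
y_test = X_test @ W_true + rng.normal(scale=0.05, size=(n_test, 2))

# Fit least-squares weights, appending a bias column to the features
A_train = np.hstack([X_train, np.ones((n_train, 1))])
W, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)

# Predict valence and arousal for the test set and report per-axis RMSE
A_test = np.hstack([X_test, np.ones((n_test, 1))])
pred = A_test @ W
rmse = np.sqrt(np.mean((pred - y_test) ** 2, axis=0))
print(f"RMSE valence: {rmse[0]:.4f}, arousal: {rmse[1]:.4f}")
```

With real image data, the feature vectors would come from the raw or preprocessed face images rather than random noise, but the least-squares fit and the per-axis RMSE evaluation follow the same pattern.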
