Abstract

We present a glasses-type wearable device that detects emotions from a human face in an unobtrusive manner. The device is designed to gather multi-channel responses from the user's face naturally and continuously while it is worn. These multi-channel responses comprise physiological signals from the facial muscles and organs, measured via electrodermal activity (EDA) and photoplethysmography. Because EDA signal quality is highly sensitive to the sensing position, we conducted experiments to determine the optimal positions of the EDA sensors on the wearable device. In addition to the physiological data, the device can capture the image region representing local facial expressions around the left eye via a built-in camera. In this study, we developed and validated an algorithm that recognizes emotions from the multi-channel responses obtained by the device. The results show that an emotion recognition algorithm using only local facial expressions classifies emotions with 78% accuracy; using the multi-channel data increases this accuracy by 10.1%. This unobtrusive wearable system based on facial multi-channel responses is well suited to monitoring a user's emotions in daily life and has great potential for use in the healthcare industry.
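The multi-channel approach described above can be sketched as feature-level fusion: features from the facial-expression channel and the two physiological channels are concatenated before classification. The feature dimensions, the synthetic data, and the choice of classifier below are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of feature-level (early) fusion of the three channels.
# All dimensions, features, and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200                            # synthetic samples
face = rng.normal(size=(n, 16))    # e.g. features around the left eye
eda = rng.normal(size=(n, 4))      # electrodermal-activity features
ppg = rng.normal(size=(n, 4))      # photoplethysmogram features
y = rng.integers(0, 3, size=n)     # three emotion classes (illustrative)

# Early fusion: concatenate per-modality feature vectors into one input.
X = np.hstack([face, eda, ppg])
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 3))
```

With real recordings, each channel would of course be replaced by features extracted from the device's camera, EDA, and PPG signals; the point here is only the shape of the fusion step.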

Highlights

  • Emotion recognition is a technology that predicts people's emotional states from user responses such as verbal or facial expressions [1]; it can be applied in various fields, including health care [2], [3], gaming [4], and education [5], [6]

  • The results indicate that the female participants showed better emotion recognition rates for facial expressions, consistent with the results in [40], implying that women use facial expressions more frequently than men

  • A multi-modal wearable device has the strength of being applicable to various real-life situations. Beyond improving recognition accuracy, using two modalities offers compensation: when facial expressions cannot be captured effectively, for example under large illumination changes, the biosignals can compensate for them; conversely, facial expressions can compensate for the biosignals in conditions where they do not work effectively, such as when the user is under cognitive stress
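The compensation idea in the last highlight can be illustrated as reliability-weighted late fusion: each modality produces class probabilities, and a degraded modality (e.g. the facial channel under a strong illumination change) is down-weighted. The weights and probability values below are made up for the example and are not from the paper.

```python
# Illustrative late-fusion rule: average per-modality class probabilities,
# weighted by a reliability score for each modality.
import numpy as np

def fuse(p_face, p_bio, w_face, w_bio):
    """Reliability-weighted average of two class-probability vectors."""
    p = w_face * np.asarray(p_face) + w_bio * np.asarray(p_bio)
    return p / p.sum()  # renormalize to a probability distribution

# Normal conditions: both modalities trusted equally.
print(fuse([0.7, 0.2, 0.1], [0.5, 0.3, 0.2], 1.0, 1.0))

# Poor lighting: facial channel down-weighted, biosignals dominate.
print(fuse([0.34, 0.33, 0.33], [0.1, 0.8, 0.1], 0.2, 1.0))
```

How the reliability weights are obtained (e.g. from illumination or signal-quality estimates) is a separate design question; this only shows the fallback mechanism.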


Introduction

Emotion recognition is a technology that predicts people's emotional states from user responses such as verbal or facial expressions [1]; it can be applied in various fields, including health care [2], [3], gaming [4], and education [5], [6]. Many people in workplaces experience physical activity and cognitive stress, both of which affect their biosignals, so using the biosignals alone may not be reliable [11]. In such cases, it is desirable to use additional modalities to obtain more reliable emotional information.

