Abstract

The automatic, real-time recognition of the user's emotional state is a feature that can benefit different areas of Human-Computer Interaction. The scientific literature presents several techniques for recognizing the user's emotional state. However, many of them rely on sensors that entail financial costs and can cause discomfort to the user. In this scenario, recognizing the emotional state through the analysis of facial expressions is a useful and practical approach, since it does not require sensors attached to the user's body and can be executed on different types of devices. Despite these advantages, free software for facial expression analysis is still incipient, and performance evaluations of this type of software are usually not available. To contribute to this context and assist researchers who need this type of software, this study presents a comparative analysis of two open-source emotion recognition libraries (CLMTrackr and Face-api.js) under simulated environmental conditions related to lighting and distance. Using images from two datasets, we generated 8,675 videos simulating 25 different environmental conditions. Our results indicate that the environmental conditions did not have a major impact on the accuracy of the software: CLMTrackr and Face-api.js achieved average accuracies of 28% and 64%, respectively.
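
For reference, the sketch below illustrates how Face-api.js can estimate per-frame expression probabilities from a video element in the browser. The model URL, detector options, sampling interval, and helper names are illustrative assumptions; this is a minimal example of the library's expression API, not a reproduction of the exact experimental setup used in the study.

```ts
import * as faceapi from 'face-api.js';

// Load the face detector and expression classifier weights.
// The model URL is an assumption; the pretrained weights are
// distributed with the face-api.js repository.
async function loadModels(modelUrl: string): Promise<void> {
  await faceapi.nets.tinyFaceDetector.loadFromUri(modelUrl);
  await faceapi.nets.faceExpressionNet.loadFromUri(modelUrl);
}

// Detect faces in the current video frame and return the most
// likely expression (e.g. happy, sad, angry, neutral) per face.
async function detectExpressions(video: HTMLVideoElement) {
  const results = await faceapi
    .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
    .withFaceExpressions();

  return results.map(r => {
    // expressions.asSortedArray() ranks the labels by probability.
    const top = r.expressions.asSortedArray()[0];
    return { expression: top.expression, probability: top.probability };
  });
}

// Hypothetical usage: sample a webcam or video feed once per second.
async function run(video: HTMLVideoElement): Promise<void> {
  await loadModels('/models');
  setInterval(async () => {
    const faces = await detectExpressions(video);
    console.log(faces);
  }, 1000);
}
```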
