Abstract

Affective computing is an important topic in Human-Computer Interaction, where user emotions and emotional communication can be used to improve the usability of a system. Several strategies exist for detecting user emotions, but it remains an open question which strategy is most suitable and compatible for detecting emotions on mobile devices. Multimodal emotion recognition addresses this by combining two or more strategies to identify the most meaningful emotion. Emotion identification through facial expressions and through text analytics has each achieved high accuracy, but combining the two and applying them practically in a mobile environment remains to be done. Three prototypes were developed using evolutionary prototyping that detect emotions from facial expressions and text data using state-of-the-art APIs and SDKs; the base of the prototypes is a keyboard, the Emotional Keyboard, which is compatible with Android devices. Prototypes 1 and 2 were evaluated through participatory design, reviewing the suitability of emotion identification from facial expressions and text data in the mobile context. Evaluation of Prototype 3 remains future work, in which a confusion matrix will be built to verify accuracy by cross-checking against the training and validation accuracies obtained when developing the neural network.
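The planned confusion-matrix check for Prototype 3 could be sketched as follows. The emotion classes and example labels here are illustrative assumptions, not taken from the paper; rows are true classes and columns are predicted classes, and overall accuracy is the trace divided by the total count.

```python
# Illustrative sketch only: the emotion set and labels below are assumptions,
# not the classes used by the Emotional Keyboard prototypes.
EMOTIONS = ["happy", "sad", "angry", "neutral"]

def confusion_matrix(true_labels, predicted_labels, classes):
    """Build a matrix where rows = true class, columns = predicted class."""
    index = {c: i for i, c in enumerate(classes)}
    matrix = [[0] * len(classes) for _ in classes]
    for t, p in zip(true_labels, predicted_labels):
        matrix[index[t]][index[p]] += 1
    return matrix

def accuracy(matrix):
    """Fraction of samples on the diagonal (correct predictions)."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# Hypothetical ground-truth vs. predicted emotions for six samples.
true = ["happy", "sad", "angry", "neutral", "happy", "sad"]
pred = ["happy", "sad", "neutral", "neutral", "happy", "angry"]

m = confusion_matrix(true, pred, EMOTIONS)
print(accuracy(m))  # 4 of 6 predictions correct
```

The resulting accuracy would then be compared against the training and validation accuracies recorded while developing the neural network, and the off-diagonal cells show which emotion pairs the model confuses.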
