Abstract

The intersection of people, data and intelligent machines has a far-reaching impact on the productivity, efficiency and operations of a smart industry. The Internet of Things (IoT) offers great potential for workplace gains through "quantified self" and computer vision strategies, which target productivity, fitness, wellness and improvement of the work environment. Recognizing and regulating human emotion is vital to people analytics, as emotion plays an important role in workplace productivity. Within the smart-industry setting, various non-invasive IoT devices can be used to recognize emotions and study behavioral outcomes in various situations. This research puts forward a deep learning model for real-time detection of human emotional state using multimodal data from the Emotional Internet of Things (E-IoT). The proposed multimodal emotion recognition model, MEmoR, makes use of two data modalities: visual and psychophysiological. The video signals are sampled to obtain image frames, and a ResNet50 model pre-trained for face recognition is fine-tuned for emotion classification. Simultaneously, a CNN is trained on the psychophysiological signals, and the outputs of the two modality networks are combined using decision-level weighted fusion. The model is evaluated on the benchmark BioVid Emo DB multimodal dataset and compared with the state of the art.
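The two-branch pipeline described above can be sketched as follows. This is a minimal illustration assuming a PyTorch implementation; the branch definitions, the number of physiological channels, the class count, and the fusion weight `alpha` are assumptions for exposition, not the authors' published configuration.

```python
# Minimal sketch of two modality networks combined by decision-level
# weighted fusion (all hyperparameters here are illustrative assumptions).
import torch
import torch.nn.functional as F
from torchvision import models

NUM_EMOTIONS = 5  # assumed class count (BioVid Emo DB covers five discrete emotions)

# Visual branch: ResNet50 with its classifier head replaced for emotion classes
# (the paper starts from weights pre-trained for face recognition).
visual_net = models.resnet50(weights=None)
visual_net.fc = torch.nn.Linear(visual_net.fc.in_features, NUM_EMOTIONS)

# Psychophysiological branch: a small 1-D CNN over multi-channel biosignals
# (three input channels and the layer sizes are assumptions).
physio_net = torch.nn.Sequential(
    torch.nn.Conv1d(3, 32, kernel_size=7, padding=3),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool1d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(32, NUM_EMOTIONS),
)

def predict_emotion(frame, signals, alpha=0.6):
    """Fuse the two modality predictions at the decision level.

    frame:   (batch, 3, H, W) image tensor from the sampled video
    signals: (batch, 3, T) psychophysiological signal tensor
    alpha:   weight given to the visual branch (assumed value)
    """
    p_visual = F.softmax(visual_net(frame), dim=1)
    p_physio = F.softmax(physio_net(signals), dim=1)
    p_fused = alpha * p_visual + (1.0 - alpha) * p_physio
    return p_fused.argmax(dim=1)
```

In decision-level fusion each branch produces a full class distribution independently, and only the probabilities are mixed; this keeps the branches separately trainable, in contrast to feature-level fusion, which concatenates intermediate representations before classification.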
