Abstract

Affective computing focuses on recognizing emotions by combining psychology, computer science, and biomedical engineering. As virtual reality (VR) becomes more widely accessible, affective computing has grown increasingly important for supporting social interaction on online virtual platforms. However, accurately estimating a person's emotional state in VR is challenging because conditions differ from the real world; for example, facial expressions are largely unavailable. This research proposes a self-training method that exploits unlabeled data and uses a reinforcement learning approach to select and label data more accurately. Experiments on a dataset of VR players' dialogues show that the proposed method achieved over 80% accuracy on the dominance and arousal labels and outperformed previous techniques in few-shot classification of emotions from physiological signals.
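The abstract does not spell out how the reinforcement learning component steers the self-training loop, so the following is only a minimal sketch of one plausible reading: an epsilon-greedy bandit that, at each self-training round, picks a confidence threshold (the "action") for admitting pseudo-labeled samples and is rewarded by the resulting change in validation accuracy. The base classifier, threshold set, reward definition, and all function and variable names here are assumptions for illustration, not the authors' actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def self_train(X_lab, y_lab, X_unlab, X_val, y_val,
               thresholds=(0.6, 0.7, 0.8, 0.9), rounds=10, epsilon=0.2):
    """Hypothetical self-training loop with a bandit-style selection policy."""
    q = np.zeros(len(thresholds))       # estimated value of each threshold (action)
    counts = np.zeros(len(thresholds))  # how often each action was taken
    X_pool, y_pool = X_lab.copy(), y_lab.copy()
    unlab = X_unlab.copy()

    # Stand-in base classifier; the paper presumably uses a model suited to
    # physiological signals rather than logistic regression.
    model = LogisticRegression(max_iter=1000).fit(X_pool, y_pool)
    prev_acc = model.score(X_val, y_val)

    for _ in range(rounds):
        if len(unlab) == 0:
            break

        # Epsilon-greedy action selection over confidence thresholds.
        if rng.random() < epsilon:
            a = int(rng.integers(len(thresholds)))
        else:
            a = int(np.argmax(q))
        tau = thresholds[a]

        # Pseudo-label unlabeled samples whose predicted confidence exceeds tau.
        proba = model.predict_proba(unlab)
        conf = proba.max(axis=1)
        keep = conf >= tau
        if keep.any():
            pseudo = model.classes_[proba[keep].argmax(axis=1)]
            X_pool = np.vstack([X_pool, unlab[keep]])
            y_pool = np.concatenate([y_pool, pseudo])
            unlab = unlab[~keep]
            model = LogisticRegression(max_iter=1000).fit(X_pool, y_pool)

        # Reward = improvement in validation accuracy; incremental Q-update.
        acc = model.score(X_val, y_val)
        reward = acc - prev_acc
        prev_acc = acc
        counts[a] += 1
        q[a] += (reward - q[a]) / counts[a]

    return model
```

In this reading, the reinforcement signal simply biases the loop toward pseudo-labeling decisions that actually improve held-out performance, which is one way an RL-based selector could "select and label data more accurately" than a fixed confidence cutoff.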
