Abstract

Continuous emotion recognition has been a compelling topic in affective computing because it interprets human emotions in a subtle, continuous manner. Existing studies have achieved strong recognition performance by exploiting multimodal knowledge. However, they generally overlook situations in which particular modalities are missing at inference time, and they are therefore sensitive to modality absence. To address this issue, we propose a novel multimodal shared network with a cross-modal distribution constraint, termed DS-Net, which aims to improve the model's robustness to missing modalities. Training the proposed network involves two components: multimodal shared space modeling and a cross-modal distribution matching constraint. The former exploits the local and temporal information of multimodal signals to model a multimodal shared space, while the latter further enhances this shared space through a loose constraint. Together, the two components effectively exploit the complementarity between videos and peripheral physiological signals (PPSs), enhancing the discriminative capability of the shared space. Based on this shared space, the DS-Net operates at inference time with only a single input modality yet still leverages multimodal knowledge to improve emotion recognition accuracy. Comprehensive experiments were conducted on two public datasets, and the results demonstrate that the proposed method is competitive with or superior to current state-of-the-art methods. Further experiments indicate that the proposed method can be extended to other modalities and to partially missing modalities, demonstrating its potential in real-world applications.
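To make the two training components concrete, the sketch below shows one way a shared-space model of this kind could be wired up in PyTorch. The GRU encoders, the linear-kernel MMD term standing in for the cross-modal distribution matching constraint, and the 0.1 trade-off weight are illustrative assumptions, not the actual DS-Net design; the sketch only illustrates training on paired video and PPS features and inferring from a single modality.

```python
# Minimal sketch of a shared-space model for missing-modality emotion recognition.
# Encoder choice (GRU), the MMD matching term, and the 0.1 weight are assumptions,
# not the published DS-Net architecture.
import torch
import torch.nn as nn


def mmd_loss(x, y):
    """Linear-kernel MMD between two batches of embeddings (assumed constraint)."""
    return (x.mean(dim=0) - y.mean(dim=0)).pow(2).sum()


class ModalityEncoder(nn.Module):
    """Maps a per-frame feature sequence into the shared embedding space."""
    def __init__(self, in_dim, hidden_dim=128, shared_dim=64):
        super().__init__()
        self.temporal = nn.GRU(in_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, shared_dim)

    def forward(self, x):               # x: (batch, time, in_dim)
        h, _ = self.temporal(x)
        return self.proj(h)             # (batch, time, shared_dim)


class SharedSpaceModel(nn.Module):
    def __init__(self, video_dim, pps_dim, shared_dim=64):
        super().__init__()
        self.video_enc = ModalityEncoder(video_dim, shared_dim=shared_dim)
        self.pps_enc = ModalityEncoder(pps_dim, shared_dim=shared_dim)
        self.head = nn.Linear(shared_dim, 1)    # continuous label, e.g. valence

    def forward(self, video=None, pps=None):
        """Either modality alone is enough at inference time."""
        z_v = self.video_enc(video) if video is not None else None
        z_p = self.pps_enc(pps) if pps is not None else None
        if z_v is not None and z_p is not None:
            z = 0.5 * (z_v + z_p)       # fuse in the shared space when both are present
        else:
            z = z_v if z_v is not None else z_p
        return self.head(z).squeeze(-1), z_v, z_p


# Toy training step (random tensors stand in for real data).
model = SharedSpaceModel(video_dim=512, pps_dim=32)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
video = torch.randn(8, 50, 512)         # 8 clips, 50 frames, 512-d visual features
pps = torch.randn(8, 50, 32)            # temporally aligned PPS features
labels = torch.rand(8, 50)              # continuous valence annotations

pred, z_v, z_p = model(video=video, pps=pps)
task = nn.functional.mse_loss(pred, labels)
match = mmd_loss(z_v.reshape(-1, z_v.size(-1)), z_p.reshape(-1, z_p.size(-1)))
loss = task + 0.1 * match               # 0.1 is an assumed trade-off weight
opt.zero_grad()
loss.backward()
opt.step()

# Inference with the video modality missing: feed only PPS.
with torch.no_grad():
    pps_only_pred, _, _ = model(pps=pps)
```

Because both encoders project into the same shared space and the regression head is trained on that space, dropping a modality at inference time leaves the head unchanged, which is the robustness property the abstract attributes to DS-Net.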
